Test Report: KVM_Linux_crio 18995

b1be302e2d7c5dc26631b6d9b0f739ac85738fce:2024-05-31:34705

Test fail (13/210)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-705710 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-705710 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.958033222s)
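Note that the start invocation above did not exit with its own error: it was killed at 39m59.96s, which lines up with the 40-minute budget recorded for the test (2400.06s), so the captured output below simply stops mid-run. A minimal sketch for reproducing the run by hand, outside the test harness, assuming a host with libvirt/kvm2 set up and the out/minikube-linux-amd64 binary already built (profile name reused from the log):

	# Re-run the same start command manually so it is not subject to the test timeout;
	# flags are taken from the failing run above.
	out/minikube-linux-amd64 start -p addons-705710 --wait=true --memory=4000 --alsologtostderr \
	    --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
	    --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	    --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2 --container-runtime=crio \
	    --addons=ingress --addons=ingress-dns --addons=helm-tiller

	# Inspect cluster and addon state once it stalls, then clean up the profile.
	out/minikube-linux-amd64 logs -p addons-705710
	out/minikube-linux-amd64 delete -p addons-705710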

-- stdout --
	* [addons-705710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-705710" primary control-plane node in "addons-705710" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	  - Using image docker.io/marcnuri/yakd:0.0.4
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	  - Using image docker.io/busybox:stable
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	  - Using image ghcr.io/helm/tiller:v2.17.0
	* Verifying ingress addon...
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-705710 service yakd-dashboard -n yakd-dashboard
	
	* Verifying registry addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying csi-hostpath-driver addon...
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-705710 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: nvidia-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0531 17:56:32.950212   19117 out.go:291] Setting OutFile to fd 1 ...
	I0531 17:56:32.950440   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:32.950448   19117 out.go:304] Setting ErrFile to fd 2...
	I0531 17:56:32.950453   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:32.950650   19117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 17:56:32.951194   19117 out.go:298] Setting JSON to false
	I0531 17:56:32.951984   19117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2339,"bootTime":1717175854,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:56:32.952039   19117 start.go:139] virtualization: kvm guest
	I0531 17:56:32.954094   19117 out.go:177] * [addons-705710] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:56:32.955466   19117 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 17:56:32.956612   19117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:56:32.955468   19117 notify.go:220] Checking for updates...
	I0531 17:56:32.959042   19117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 17:56:32.960257   19117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 17:56:32.961460   19117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 17:56:32.962628   19117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 17:56:32.963972   19117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 17:56:32.994161   19117 out.go:177] * Using the kvm2 driver based on user configuration
	I0531 17:56:32.995165   19117 start.go:297] selected driver: kvm2
	I0531 17:56:32.995180   19117 start.go:901] validating driver "kvm2" against <nil>
	I0531 17:56:32.995189   19117 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 17:56:32.995834   19117 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:32.995902   19117 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 17:56:33.009639   19117 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 17:56:33.009676   19117 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 17:56:33.009853   19117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 17:56:33.009904   19117 cni.go:84] Creating CNI manager for ""
	I0531 17:56:33.009916   19117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 17:56:33.009924   19117 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 17:56:33.009960   19117 start.go:340] cluster config:
	{Name:addons-705710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-705710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 17:56:33.010051   19117 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:33.011640   19117 out.go:177] * Starting "addons-705710" primary control-plane node in "addons-705710" cluster
	I0531 17:56:33.012831   19117 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 17:56:33.012862   19117 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 17:56:33.012870   19117 cache.go:56] Caching tarball of preloaded images
	I0531 17:56:33.012953   19117 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 17:56:33.012981   19117 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 17:56:33.013263   19117 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/config.json ...
	I0531 17:56:33.013284   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/config.json: {Name:mke31b1de0ad7536292e9956622ebfa86b596f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:56:33.013406   19117 start.go:360] acquireMachinesLock for addons-705710: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 17:56:33.013448   19117 start.go:364] duration metric: took 29.538µs to acquireMachinesLock for "addons-705710"
	I0531 17:56:33.013464   19117 start.go:93] Provisioning new machine with config: &{Name:addons-705710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-705710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 17:56:33.013521   19117 start.go:125] createHost starting for "" (driver="kvm2")
	I0531 17:56:33.015084   19117 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0531 17:56:33.015211   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:56:33.015248   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:56:33.029438   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33719
	I0531 17:56:33.029816   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:56:33.030354   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:56:33.030383   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:56:33.030678   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:56:33.030841   19117 main.go:141] libmachine: (addons-705710) Calling .GetMachineName
	I0531 17:56:33.030978   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:33.031113   19117 start.go:159] libmachine.API.Create for "addons-705710" (driver="kvm2")
	I0531 17:56:33.031140   19117 client.go:168] LocalClient.Create starting
	I0531 17:56:33.031174   19117 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 17:56:33.136920   19117 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 17:56:33.501163   19117 main.go:141] libmachine: Running pre-create checks...
	I0531 17:56:33.501186   19117 main.go:141] libmachine: (addons-705710) Calling .PreCreateCheck
	I0531 17:56:33.501724   19117 main.go:141] libmachine: (addons-705710) Calling .GetConfigRaw
	I0531 17:56:33.502091   19117 main.go:141] libmachine: Creating machine...
	I0531 17:56:33.502103   19117 main.go:141] libmachine: (addons-705710) Calling .Create
	I0531 17:56:33.502271   19117 main.go:141] libmachine: (addons-705710) Creating KVM machine...
	I0531 17:56:33.503464   19117 main.go:141] libmachine: (addons-705710) DBG | found existing default KVM network
	I0531 17:56:33.504146   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:33.504020   19139 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0531 17:56:33.504172   19117 main.go:141] libmachine: (addons-705710) DBG | created network xml: 
	I0531 17:56:33.504186   19117 main.go:141] libmachine: (addons-705710) DBG | <network>
	I0531 17:56:33.504196   19117 main.go:141] libmachine: (addons-705710) DBG |   <name>mk-addons-705710</name>
	I0531 17:56:33.504209   19117 main.go:141] libmachine: (addons-705710) DBG |   <dns enable='no'/>
	I0531 17:56:33.504217   19117 main.go:141] libmachine: (addons-705710) DBG |   
	I0531 17:56:33.504232   19117 main.go:141] libmachine: (addons-705710) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0531 17:56:33.504241   19117 main.go:141] libmachine: (addons-705710) DBG |     <dhcp>
	I0531 17:56:33.504247   19117 main.go:141] libmachine: (addons-705710) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0531 17:56:33.504252   19117 main.go:141] libmachine: (addons-705710) DBG |     </dhcp>
	I0531 17:56:33.504259   19117 main.go:141] libmachine: (addons-705710) DBG |   </ip>
	I0531 17:56:33.504265   19117 main.go:141] libmachine: (addons-705710) DBG |   
	I0531 17:56:33.504271   19117 main.go:141] libmachine: (addons-705710) DBG | </network>
	I0531 17:56:33.504278   19117 main.go:141] libmachine: (addons-705710) DBG | 
	I0531 17:56:33.509263   19117 main.go:141] libmachine: (addons-705710) DBG | trying to create private KVM network mk-addons-705710 192.168.39.0/24...
	I0531 17:56:33.567449   19117 main.go:141] libmachine: (addons-705710) DBG | private KVM network mk-addons-705710 192.168.39.0/24 created
	I0531 17:56:33.567482   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:33.567397   19139 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 17:56:33.567503   19117 main.go:141] libmachine: (addons-705710) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710 ...
	I0531 17:56:33.567521   19117 main.go:141] libmachine: (addons-705710) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 17:56:33.567537   19117 main.go:141] libmachine: (addons-705710) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 17:56:33.801035   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:33.800874   19139 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa...
	I0531 17:56:34.243317   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:34.243173   19139 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/addons-705710.rawdisk...
	I0531 17:56:34.243352   19117 main.go:141] libmachine: (addons-705710) DBG | Writing magic tar header
	I0531 17:56:34.243371   19117 main.go:141] libmachine: (addons-705710) DBG | Writing SSH key tar header
	I0531 17:56:34.243395   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:34.243280   19139 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710 ...
	I0531 17:56:34.243413   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710
	I0531 17:56:34.243421   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 17:56:34.243431   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710 (perms=drwx------)
	I0531 17:56:34.243465   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 17:56:34.243494   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 17:56:34.243504   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 17:56:34.243519   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 17:56:34.243530   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home/jenkins
	I0531 17:56:34.243541   19117 main.go:141] libmachine: (addons-705710) DBG | Checking permissions on dir: /home
	I0531 17:56:34.243552   19117 main.go:141] libmachine: (addons-705710) DBG | Skipping /home - not owner
	I0531 17:56:34.243577   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 17:56:34.243598   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 17:56:34.243612   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 17:56:34.243624   19117 main.go:141] libmachine: (addons-705710) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 17:56:34.243636   19117 main.go:141] libmachine: (addons-705710) Creating domain...
	I0531 17:56:34.244526   19117 main.go:141] libmachine: (addons-705710) define libvirt domain using xml: 
	I0531 17:56:34.244548   19117 main.go:141] libmachine: (addons-705710) <domain type='kvm'>
	I0531 17:56:34.244558   19117 main.go:141] libmachine: (addons-705710)   <name>addons-705710</name>
	I0531 17:56:34.244568   19117 main.go:141] libmachine: (addons-705710)   <memory unit='MiB'>4000</memory>
	I0531 17:56:34.244602   19117 main.go:141] libmachine: (addons-705710)   <vcpu>2</vcpu>
	I0531 17:56:34.244619   19117 main.go:141] libmachine: (addons-705710)   <features>
	I0531 17:56:34.244628   19117 main.go:141] libmachine: (addons-705710)     <acpi/>
	I0531 17:56:34.244632   19117 main.go:141] libmachine: (addons-705710)     <apic/>
	I0531 17:56:34.244637   19117 main.go:141] libmachine: (addons-705710)     <pae/>
	I0531 17:56:34.244644   19117 main.go:141] libmachine: (addons-705710)     
	I0531 17:56:34.244650   19117 main.go:141] libmachine: (addons-705710)   </features>
	I0531 17:56:34.244657   19117 main.go:141] libmachine: (addons-705710)   <cpu mode='host-passthrough'>
	I0531 17:56:34.244676   19117 main.go:141] libmachine: (addons-705710)   
	I0531 17:56:34.244692   19117 main.go:141] libmachine: (addons-705710)   </cpu>
	I0531 17:56:34.244704   19117 main.go:141] libmachine: (addons-705710)   <os>
	I0531 17:56:34.244716   19117 main.go:141] libmachine: (addons-705710)     <type>hvm</type>
	I0531 17:56:34.244726   19117 main.go:141] libmachine: (addons-705710)     <boot dev='cdrom'/>
	I0531 17:56:34.244735   19117 main.go:141] libmachine: (addons-705710)     <boot dev='hd'/>
	I0531 17:56:34.244748   19117 main.go:141] libmachine: (addons-705710)     <bootmenu enable='no'/>
	I0531 17:56:34.244759   19117 main.go:141] libmachine: (addons-705710)   </os>
	I0531 17:56:34.244772   19117 main.go:141] libmachine: (addons-705710)   <devices>
	I0531 17:56:34.244783   19117 main.go:141] libmachine: (addons-705710)     <disk type='file' device='cdrom'>
	I0531 17:56:34.244817   19117 main.go:141] libmachine: (addons-705710)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/boot2docker.iso'/>
	I0531 17:56:34.244838   19117 main.go:141] libmachine: (addons-705710)       <target dev='hdc' bus='scsi'/>
	I0531 17:56:34.244853   19117 main.go:141] libmachine: (addons-705710)       <readonly/>
	I0531 17:56:34.244863   19117 main.go:141] libmachine: (addons-705710)     </disk>
	I0531 17:56:34.244876   19117 main.go:141] libmachine: (addons-705710)     <disk type='file' device='disk'>
	I0531 17:56:34.244888   19117 main.go:141] libmachine: (addons-705710)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 17:56:34.244902   19117 main.go:141] libmachine: (addons-705710)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/addons-705710.rawdisk'/>
	I0531 17:56:34.244917   19117 main.go:141] libmachine: (addons-705710)       <target dev='hda' bus='virtio'/>
	I0531 17:56:34.244930   19117 main.go:141] libmachine: (addons-705710)     </disk>
	I0531 17:56:34.244941   19117 main.go:141] libmachine: (addons-705710)     <interface type='network'>
	I0531 17:56:34.244976   19117 main.go:141] libmachine: (addons-705710)       <source network='mk-addons-705710'/>
	I0531 17:56:34.244991   19117 main.go:141] libmachine: (addons-705710)       <model type='virtio'/>
	I0531 17:56:34.245005   19117 main.go:141] libmachine: (addons-705710)     </interface>
	I0531 17:56:34.245015   19117 main.go:141] libmachine: (addons-705710)     <interface type='network'>
	I0531 17:56:34.245028   19117 main.go:141] libmachine: (addons-705710)       <source network='default'/>
	I0531 17:56:34.245038   19117 main.go:141] libmachine: (addons-705710)       <model type='virtio'/>
	I0531 17:56:34.245049   19117 main.go:141] libmachine: (addons-705710)     </interface>
	I0531 17:56:34.245058   19117 main.go:141] libmachine: (addons-705710)     <serial type='pty'>
	I0531 17:56:34.245067   19117 main.go:141] libmachine: (addons-705710)       <target port='0'/>
	I0531 17:56:34.245079   19117 main.go:141] libmachine: (addons-705710)     </serial>
	I0531 17:56:34.245092   19117 main.go:141] libmachine: (addons-705710)     <console type='pty'>
	I0531 17:56:34.245105   19117 main.go:141] libmachine: (addons-705710)       <target type='serial' port='0'/>
	I0531 17:56:34.245118   19117 main.go:141] libmachine: (addons-705710)     </console>
	I0531 17:56:34.245129   19117 main.go:141] libmachine: (addons-705710)     <rng model='virtio'>
	I0531 17:56:34.245141   19117 main.go:141] libmachine: (addons-705710)       <backend model='random'>/dev/random</backend>
	I0531 17:56:34.245154   19117 main.go:141] libmachine: (addons-705710)     </rng>
	I0531 17:56:34.245163   19117 main.go:141] libmachine: (addons-705710)     
	I0531 17:56:34.245171   19117 main.go:141] libmachine: (addons-705710)     
	I0531 17:56:34.245183   19117 main.go:141] libmachine: (addons-705710)   </devices>
	I0531 17:56:34.245194   19117 main.go:141] libmachine: (addons-705710) </domain>
	I0531 17:56:34.245205   19117 main.go:141] libmachine: (addons-705710) 
	I0531 17:56:34.250897   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:a1:b1:65 in network default
	I0531 17:56:34.251418   19117 main.go:141] libmachine: (addons-705710) Ensuring networks are active...
	I0531 17:56:34.251435   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:34.252011   19117 main.go:141] libmachine: (addons-705710) Ensuring network default is active
	I0531 17:56:34.252360   19117 main.go:141] libmachine: (addons-705710) Ensuring network mk-addons-705710 is active
	I0531 17:56:34.254014   19117 main.go:141] libmachine: (addons-705710) Getting domain xml...
	I0531 17:56:34.254647   19117 main.go:141] libmachine: (addons-705710) Creating domain...
	I0531 17:56:35.612095   19117 main.go:141] libmachine: (addons-705710) Waiting to get IP...
	I0531 17:56:35.612950   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:35.613353   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:35.613425   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:35.613367   19139 retry.go:31] will retry after 216.479412ms: waiting for machine to come up
	I0531 17:56:35.831742   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:35.832162   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:35.832183   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:35.832114   19139 retry.go:31] will retry after 261.120659ms: waiting for machine to come up
	I0531 17:56:36.094529   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:36.094987   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:36.095023   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:36.094940   19139 retry.go:31] will retry after 338.247437ms: waiting for machine to come up
	I0531 17:56:36.434308   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:36.434711   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:36.434739   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:36.434698   19139 retry.go:31] will retry after 428.031993ms: waiting for machine to come up
	I0531 17:56:36.864185   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:36.864618   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:36.864646   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:36.864539   19139 retry.go:31] will retry after 690.836212ms: waiting for machine to come up
	I0531 17:56:37.557422   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:37.557777   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:37.557802   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:37.557742   19139 retry.go:31] will retry after 825.009218ms: waiting for machine to come up
	I0531 17:56:38.384134   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:38.384543   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:38.384572   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:38.384473   19139 retry.go:31] will retry after 1.151397008s: waiting for machine to come up
	I0531 17:56:39.537829   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:39.538191   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:39.538218   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:39.538147   19139 retry.go:31] will retry after 941.081214ms: waiting for machine to come up
	I0531 17:56:40.481165   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:40.481533   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:40.481562   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:40.481481   19139 retry.go:31] will retry after 1.137429694s: waiting for machine to come up
	I0531 17:56:41.620151   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:41.620593   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:41.620623   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:41.620544   19139 retry.go:31] will retry after 1.733666049s: waiting for machine to come up
	I0531 17:56:43.356093   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:43.356504   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:43.356533   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:43.356462   19139 retry.go:31] will retry after 2.599781016s: waiting for machine to come up
	I0531 17:56:45.957789   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:45.958240   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:45.958263   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:45.958194   19139 retry.go:31] will retry after 3.425382622s: waiting for machine to come up
	I0531 17:56:49.384993   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:49.385478   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:49.385502   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:49.385438   19139 retry.go:31] will retry after 3.420524397s: waiting for machine to come up
	I0531 17:56:52.809879   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:52.810203   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find current IP address of domain addons-705710 in network mk-addons-705710
	I0531 17:56:52.810225   19117 main.go:141] libmachine: (addons-705710) DBG | I0531 17:56:52.810164   19139 retry.go:31] will retry after 5.488839965s: waiting for machine to come up
	I0531 17:56:58.300187   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.300574   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has current primary IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.300590   19117 main.go:141] libmachine: (addons-705710) Found IP for machine: 192.168.39.124
	I0531 17:56:58.300600   19117 main.go:141] libmachine: (addons-705710) Reserving static IP address...
	I0531 17:56:58.301014   19117 main.go:141] libmachine: (addons-705710) DBG | unable to find host DHCP lease matching {name: "addons-705710", mac: "52:54:00:3f:a6:a7", ip: "192.168.39.124"} in network mk-addons-705710
	I0531 17:56:58.365949   19117 main.go:141] libmachine: (addons-705710) DBG | Getting to WaitForSSH function...
	I0531 17:56:58.365975   19117 main.go:141] libmachine: (addons-705710) Reserved static IP address: 192.168.39.124
	I0531 17:56:58.365993   19117 main.go:141] libmachine: (addons-705710) Waiting for SSH to be available...
	I0531 17:56:58.368446   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.368875   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.368904   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.369063   19117 main.go:141] libmachine: (addons-705710) DBG | Using SSH client type: external
	I0531 17:56:58.369090   19117 main.go:141] libmachine: (addons-705710) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa (-rw-------)
	I0531 17:56:58.369121   19117 main.go:141] libmachine: (addons-705710) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 17:56:58.369139   19117 main.go:141] libmachine: (addons-705710) DBG | About to run SSH command:
	I0531 17:56:58.369156   19117 main.go:141] libmachine: (addons-705710) DBG | exit 0
	I0531 17:56:58.500518   19117 main.go:141] libmachine: (addons-705710) DBG | SSH cmd err, output: <nil>: 
	I0531 17:56:58.500811   19117 main.go:141] libmachine: (addons-705710) KVM machine creation complete!
	I0531 17:56:58.501077   19117 main.go:141] libmachine: (addons-705710) Calling .GetConfigRaw
	I0531 17:56:58.501658   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:58.501844   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:58.501975   19117 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 17:56:58.501996   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:56:58.503364   19117 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 17:56:58.503379   19117 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 17:56:58.503385   19117 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 17:56:58.503391   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:58.505711   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.506009   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.506032   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.506156   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:58.506319   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.506463   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.506588   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:58.506748   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:58.506930   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:58.506943   19117 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 17:56:58.612158   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:56:58.612181   19117 main.go:141] libmachine: Detecting the provisioner...
	I0531 17:56:58.612189   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:58.614713   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.615046   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.615067   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.615194   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:58.615401   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.615532   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.615631   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:58.615816   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:58.615980   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:58.615991   19117 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 17:56:58.725818   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 17:56:58.725894   19117 main.go:141] libmachine: found compatible host: buildroot
	I0531 17:56:58.725907   19117 main.go:141] libmachine: Provisioning with buildroot...
	I0531 17:56:58.725920   19117 main.go:141] libmachine: (addons-705710) Calling .GetMachineName
	I0531 17:56:58.726132   19117 buildroot.go:166] provisioning hostname "addons-705710"
	I0531 17:56:58.726158   19117 main.go:141] libmachine: (addons-705710) Calling .GetMachineName
	I0531 17:56:58.726327   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:58.728645   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.729066   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.729095   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.729212   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:58.729376   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.729541   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.729692   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:58.729863   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:58.730040   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:58.730051   19117 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-705710 && echo "addons-705710" | sudo tee /etc/hostname
	I0531 17:56:58.852561   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-705710
	
	I0531 17:56:58.852582   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:58.855269   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.855602   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.855625   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.855818   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:58.855986   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.856122   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:58.856281   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:58.856469   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:58.856654   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:58.856677   19117 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-705710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-705710/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-705710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 17:56:58.973828   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 17:56:58.973854   19117 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 17:56:58.973877   19117 buildroot.go:174] setting up certificates
	I0531 17:56:58.973891   19117 provision.go:84] configureAuth start
	I0531 17:56:58.973904   19117 main.go:141] libmachine: (addons-705710) Calling .GetMachineName
	I0531 17:56:58.974141   19117 main.go:141] libmachine: (addons-705710) Calling .GetIP
	I0531 17:56:58.976628   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.976933   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.976976   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.977108   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:58.979920   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.980233   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:58.980259   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:58.980372   19117 provision.go:143] copyHostCerts
	I0531 17:56:58.980445   19117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 17:56:58.980566   19117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 17:56:58.980624   19117 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 17:56:58.980669   19117 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.addons-705710 san=[127.0.0.1 192.168.39.124 addons-705710 localhost minikube]
	I0531 17:56:59.320469   19117 provision.go:177] copyRemoteCerts
	I0531 17:56:59.320527   19117 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 17:56:59.320549   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.323274   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.323583   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.323607   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.323806   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.324009   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.324139   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.324274   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:56:59.407074   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 17:56:59.430360   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0531 17:56:59.452815   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 17:56:59.475858   19117 provision.go:87] duration metric: took 501.95398ms to configureAuth
	I0531 17:56:59.475883   19117 buildroot.go:189] setting minikube options for container-runtime
	I0531 17:56:59.476028   19117 config.go:182] Loaded profile config "addons-705710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 17:56:59.476092   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.478531   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.478831   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.478860   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.478968   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.479143   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.479324   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.479433   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.479613   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:59.479814   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:59.479836   19117 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 17:56:59.733141   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 17:56:59.733165   19117 main.go:141] libmachine: Checking connection to Docker...
	I0531 17:56:59.733172   19117 main.go:141] libmachine: (addons-705710) Calling .GetURL
	I0531 17:56:59.734329   19117 main.go:141] libmachine: (addons-705710) DBG | Using libvirt version 6000000
	I0531 17:56:59.736625   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.736996   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.737024   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.737170   19117 main.go:141] libmachine: Docker is up and running!
	I0531 17:56:59.737184   19117 main.go:141] libmachine: Reticulating splines...
	I0531 17:56:59.737191   19117 client.go:171] duration metric: took 26.706044013s to LocalClient.Create
	I0531 17:56:59.737215   19117 start.go:167] duration metric: took 26.706102438s to libmachine.API.Create "addons-705710"
	I0531 17:56:59.737227   19117 start.go:293] postStartSetup for "addons-705710" (driver="kvm2")
	I0531 17:56:59.737239   19117 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 17:56:59.737263   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:59.737488   19117 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 17:56:59.737512   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.739649   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.739929   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.739972   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.740101   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.740279   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.740427   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.740552   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:56:59.823707   19117 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 17:56:59.827815   19117 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 17:56:59.827837   19117 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 17:56:59.827909   19117 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 17:56:59.827945   19117 start.go:296] duration metric: took 90.708133ms for postStartSetup
	I0531 17:56:59.827984   19117 main.go:141] libmachine: (addons-705710) Calling .GetConfigRaw
	I0531 17:56:59.828557   19117 main.go:141] libmachine: (addons-705710) Calling .GetIP
	I0531 17:56:59.830913   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.831252   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.831282   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.831487   19117 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/config.json ...
	I0531 17:56:59.831648   19117 start.go:128] duration metric: took 26.818118093s to createHost
	I0531 17:56:59.831667   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.833961   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.834294   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.834325   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.834478   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.834749   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.834895   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.835017   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.835137   19117 main.go:141] libmachine: Using SSH client type: native
	I0531 17:56:59.835282   19117 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.124 22 <nil> <nil>}
	I0531 17:56:59.835292   19117 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 17:56:59.941265   19117 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717178219.919746132
	
	I0531 17:56:59.941284   19117 fix.go:216] guest clock: 1717178219.919746132
	I0531 17:56:59.941292   19117 fix.go:229] Guest: 2024-05-31 17:56:59.919746132 +0000 UTC Remote: 2024-05-31 17:56:59.831657496 +0000 UTC m=+26.915742279 (delta=88.088636ms)
	I0531 17:56:59.941349   19117 fix.go:200] guest clock delta is within tolerance: 88.088636ms
	I0531 17:56:59.941360   19117 start.go:83] releasing machines lock for "addons-705710", held for 26.927901765s
	I0531 17:56:59.941387   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:59.941667   19117 main.go:141] libmachine: (addons-705710) Calling .GetIP
	I0531 17:56:59.944455   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.944810   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.944837   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.944985   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:59.945472   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:59.945654   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:56:59.945738   19117 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 17:56:59.945782   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.945844   19117 ssh_runner.go:195] Run: cat /version.json
	I0531 17:56:59.945872   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:56:59.948335   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.948598   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.948661   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.948685   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.948835   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.949042   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.949192   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.949334   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:56:59.949456   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:56:59.949481   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:56:59.949707   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:56:59.949860   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:56:59.950030   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:56:59.950177   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:00.049674   19117 ssh_runner.go:195] Run: systemctl --version
	I0531 17:57:00.055472   19117 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 17:57:00.215683   19117 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 17:57:00.221658   19117 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 17:57:00.221710   19117 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 17:57:00.237673   19117 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 17:57:00.237690   19117 start.go:494] detecting cgroup driver to use...
	I0531 17:57:00.237734   19117 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 17:57:00.253417   19117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 17:57:00.266970   19117 docker.go:217] disabling cri-docker service (if available) ...
	I0531 17:57:00.267024   19117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 17:57:00.280260   19117 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 17:57:00.293493   19117 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 17:57:00.413183   19117 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 17:57:00.538747   19117 docker.go:233] disabling docker service ...
	I0531 17:57:00.538813   19117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 17:57:00.552380   19117 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 17:57:00.564865   19117 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 17:57:00.699818   19117 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 17:57:00.817988   19117 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 17:57:00.831692   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 17:57:00.849263   19117 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 17:57:00.849328   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.859754   19117 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 17:57:00.859811   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.870372   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.880689   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.890889   19117 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 17:57:00.901658   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.911948   19117 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.928358   19117 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 17:57:00.938627   19117 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 17:57:00.947700   19117 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 17:57:00.947747   19117 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 17:57:00.960764   19117 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 17:57:00.969884   19117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:57:01.080826   19117 ssh_runner.go:195] Run: sudo systemctl restart crio
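	The block of ssh_runner commands above amounts to a CRI-O reconfiguration for this run. A condensed, illustrative shell sketch (paths and values copied from the logged commands, not from the minikube source) is:
	  # point crictl at the CRI-O socket
	  printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	  # pin the pause image and switch CRI-O to the cgroupfs cgroup manager
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # enable bridge netfilter and IP forwarding, then restart the runtime
	  sudo modprobe br_netfilter
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	  sudo systemctl daemon-reload && sudo systemctl restart crio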
	I0531 17:57:01.213554   19117 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 17:57:01.213647   19117 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 17:57:01.218272   19117 start.go:562] Will wait 60s for crictl version
	I0531 17:57:01.218327   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:57:01.222058   19117 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 17:57:01.262000   19117 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 17:57:01.262113   19117 ssh_runner.go:195] Run: crio --version
	I0531 17:57:01.289655   19117 ssh_runner.go:195] Run: crio --version
	I0531 17:57:01.318010   19117 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 17:57:01.319248   19117 main.go:141] libmachine: (addons-705710) Calling .GetIP
	I0531 17:57:01.321884   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:01.322239   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:01.322259   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:01.322470   19117 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 17:57:01.326602   19117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:57:01.339402   19117 kubeadm.go:877] updating cluster {Name:addons-705710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-705710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 17:57:01.339487   19117 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 17:57:01.339521   19117 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:57:01.371230   19117 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0531 17:57:01.371298   19117 ssh_runner.go:195] Run: which lz4
	I0531 17:57:01.375013   19117 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0531 17:57:01.379011   19117 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 17:57:01.379031   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0531 17:57:02.645320   19117 crio.go:462] duration metric: took 1.270351219s to copy over tarball
	I0531 17:57:02.645407   19117 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 17:57:04.873644   19117 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.22819947s)
	I0531 17:57:04.873679   19117 crio.go:469] duration metric: took 2.228334504s to extract the tarball
	I0531 17:57:04.873688   19117 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0531 17:57:04.910729   19117 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 17:57:04.964704   19117 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 17:57:04.964736   19117 cache_images.go:84] Images are preloaded, skipping loading
	I0531 17:57:04.964743   19117 kubeadm.go:928] updating node { 192.168.39.124 8443 v1.30.1 crio true true} ...
	I0531 17:57:04.964864   19117 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-705710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:addons-705710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
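	For reference, the unit file and flags above are not applied in place; per the scp and systemctl lines that follow, they are written to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and then picked up with (illustrative sketch of the logged steps):
	  sudo systemctl daemon-reload && sudo systemctl start kubelet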
	I0531 17:57:04.964931   19117 ssh_runner.go:195] Run: crio config
	I0531 17:57:05.018644   19117 cni.go:84] Creating CNI manager for ""
	I0531 17:57:05.018662   19117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 17:57:05.018669   19117 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 17:57:05.018688   19117 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.124 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-705710 NodeName:addons-705710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 17:57:05.018799   19117 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-705710"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
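	The configuration above is written to /var/tmp/minikube/kubeadm.yaml (via kubeadm.yaml.new) further down in the log and later consumed by the kubeadm init call logged at 17:57:06. Stripped of the long --ignore-preflight-errors list that the test passes, that invocation is essentially:
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" \
	    kubeadm init --config /var/tmp/minikube/kubeadm.yaml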
	
	I0531 17:57:05.018855   19117 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 17:57:05.031285   19117 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 17:57:05.031344   19117 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 17:57:05.043127   19117 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0531 17:57:05.062078   19117 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 17:57:05.080324   19117 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0531 17:57:05.099030   19117 ssh_runner.go:195] Run: grep 192.168.39.124	control-plane.minikube.internal$ /etc/hosts
	I0531 17:57:05.103233   19117 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 17:57:05.117375   19117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:57:05.250247   19117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 17:57:05.267592   19117 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710 for IP: 192.168.39.124
	I0531 17:57:05.267618   19117 certs.go:194] generating shared ca certs ...
	I0531 17:57:05.267635   19117 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:05.267781   19117 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 17:57:05.665621   19117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt ...
	I0531 17:57:05.665653   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt: {Name:mkc585e05b95c577cb818d7a0f36be8f8a9e5f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:05.665809   19117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key ...
	I0531 17:57:05.665819   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key: {Name:mk19c5b325fc46f7dd3fd6492ea57435eb703c0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:05.665891   19117 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 17:57:05.891005   19117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt ...
	I0531 17:57:05.891031   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt: {Name:mk3339846b1495658ffdc3af93fff5fe8569b926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:05.891185   19117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key ...
	I0531 17:57:05.891196   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key: {Name:mk7f41f471f9950ebb136f5eb5648956b983fc11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:05.891262   19117 certs.go:256] generating profile certs ...
	I0531 17:57:05.891314   19117 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.key
	I0531 17:57:05.891330   19117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.crt with IP's: []
	I0531 17:57:06.036827   19117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.crt ...
	I0531 17:57:06.036856   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.crt: {Name:mk34998950d184c09f27fc3ed8c8af025bfe48cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.037011   19117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.key ...
	I0531 17:57:06.037022   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/client.key: {Name:mkc4271823dce3f9239ee04881b5d0ce34d0d78a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.037093   19117 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key.a9b18e92
	I0531 17:57:06.037112   19117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt.a9b18e92 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.124]
	I0531 17:57:06.257674   19117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt.a9b18e92 ...
	I0531 17:57:06.257700   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt.a9b18e92: {Name:mkb8dca0a4227369211863155760ec125f2e06d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.257846   19117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key.a9b18e92 ...
	I0531 17:57:06.257859   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key.a9b18e92: {Name:mk8c791c87c2e5d6e8701081bfd0ba5e60956112 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.257924   19117 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt.a9b18e92 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt
	I0531 17:57:06.257995   19117 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key.a9b18e92 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key
	I0531 17:57:06.258053   19117 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.key
	I0531 17:57:06.258070   19117 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.crt with IP's: []
	I0531 17:57:06.340304   19117 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.crt ...
	I0531 17:57:06.340330   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.crt: {Name:mk270f58052ce102786967a28add7b4c256bfd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.340472   19117 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.key ...
	I0531 17:57:06.340483   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.key: {Name:mkab59f66ac9b007cd601f8fc68a600d4ccad20b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:06.340646   19117 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 17:57:06.340680   19117 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 17:57:06.340705   19117 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 17:57:06.340727   19117 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 17:57:06.341290   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 17:57:06.373337   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 17:57:06.401930   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 17:57:06.429434   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 17:57:06.456854   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0531 17:57:06.485215   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 17:57:06.509901   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 17:57:06.533771   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/addons-705710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 17:57:06.556743   19117 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 17:57:06.579977   19117 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 17:57:06.596049   19117 ssh_runner.go:195] Run: openssl version
	I0531 17:57:06.601852   19117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 17:57:06.611965   19117 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:57:06.616202   19117 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:57:06.616260   19117 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 17:57:06.621961   19117 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
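	The two steps above install the minikube CA into the node's trust store; the symlink name is the certificate's subject hash, which the preceding openssl call prints. A minimal sketch of the same sequence:
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the hash used for the symlink name (b5213941 here)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0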
	I0531 17:57:06.631858   19117 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 17:57:06.635948   19117 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 17:57:06.635996   19117 kubeadm.go:391] StartCluster: {Name:addons-705710 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:addons-705710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 17:57:06.636079   19117 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 17:57:06.636120   19117 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 17:57:06.670683   19117 cri.go:89] found id: ""
	I0531 17:57:06.670781   19117 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 17:57:06.680595   19117 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 17:57:06.690474   19117 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 17:57:06.700048   19117 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 17:57:06.700070   19117 kubeadm.go:156] found existing configuration files:
	
	I0531 17:57:06.700122   19117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 17:57:06.709630   19117 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 17:57:06.709688   19117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 17:57:06.719055   19117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 17:57:06.728241   19117 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 17:57:06.728299   19117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 17:57:06.737432   19117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 17:57:06.746245   19117 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 17:57:06.746293   19117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 17:57:06.755515   19117 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 17:57:06.764154   19117 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 17:57:06.764191   19117 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 17:57:06.773173   19117 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0531 17:57:06.834598   19117 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0531 17:57:06.834651   19117 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 17:57:06.994960   19117 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 17:57:06.995084   19117 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 17:57:06.995206   19117 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 17:57:07.203462   19117 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 17:57:07.386410   19117 out.go:204]   - Generating certificates and keys ...
	I0531 17:57:07.386530   19117 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 17:57:07.386623   19117 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 17:57:07.386712   19117 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 17:57:07.514674   19117 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0531 17:57:07.629031   19117 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0531 17:57:07.792467   19117 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0531 17:57:08.132715   19117 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0531 17:57:08.132835   19117 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-705710 localhost] and IPs [192.168.39.124 127.0.0.1 ::1]
	I0531 17:57:08.299005   19117 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0531 17:57:08.299225   19117 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-705710 localhost] and IPs [192.168.39.124 127.0.0.1 ::1]
	I0531 17:57:08.386579   19117 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 17:57:08.591665   19117 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 17:57:08.686069   19117 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0531 17:57:08.686245   19117 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 17:57:08.866411   19117 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 17:57:09.364502   19117 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0531 17:57:09.467519   19117 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 17:57:09.822527   19117 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 17:57:09.899140   19117 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 17:57:09.899715   19117 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 17:57:09.902020   19117 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 17:57:09.903783   19117 out.go:204]   - Booting up control plane ...
	I0531 17:57:09.903875   19117 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 17:57:09.903962   19117 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 17:57:09.904086   19117 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 17:57:09.919568   19117 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 17:57:09.920423   19117 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 17:57:09.920504   19117 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 17:57:10.064487   19117 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0531 17:57:10.064587   19117 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0531 17:57:11.065839   19117 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.001896114s
	I0531 17:57:11.065965   19117 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0531 17:57:15.565707   19117 kubeadm.go:309] [api-check] The API server is healthy after 4.502092254s
	I0531 17:57:15.583365   19117 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 17:57:15.600805   19117 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 17:57:15.633443   19117 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 17:57:15.633693   19117 kubeadm.go:309] [mark-control-plane] Marking the node addons-705710 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 17:57:15.645845   19117 kubeadm.go:309] [bootstrap-token] Using token: pi9jtu.as5ddihpkf6kf5xm
	I0531 17:57:15.647186   19117 out.go:204]   - Configuring RBAC rules ...
	I0531 17:57:15.647318   19117 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 17:57:15.654865   19117 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 17:57:15.661029   19117 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 17:57:15.664197   19117 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 17:57:15.667307   19117 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 17:57:15.670283   19117 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 17:57:15.976126   19117 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 17:57:16.403231   19117 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0531 17:57:16.974703   19117 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0531 17:57:16.975727   19117 kubeadm.go:309] 
	I0531 17:57:16.975819   19117 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0531 17:57:16.975832   19117 kubeadm.go:309] 
	I0531 17:57:16.975946   19117 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0531 17:57:16.975964   19117 kubeadm.go:309] 
	I0531 17:57:16.976038   19117 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0531 17:57:16.976118   19117 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 17:57:16.976203   19117 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 17:57:16.976221   19117 kubeadm.go:309] 
	I0531 17:57:16.976290   19117 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0531 17:57:16.976301   19117 kubeadm.go:309] 
	I0531 17:57:16.976370   19117 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 17:57:16.976387   19117 kubeadm.go:309] 
	I0531 17:57:16.976495   19117 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0531 17:57:16.976611   19117 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 17:57:16.976700   19117 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 17:57:16.976716   19117 kubeadm.go:309] 
	I0531 17:57:16.976932   19117 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 17:57:16.977079   19117 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0531 17:57:16.977090   19117 kubeadm.go:309] 
	I0531 17:57:16.977210   19117 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token pi9jtu.as5ddihpkf6kf5xm \
	I0531 17:57:16.977341   19117 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 \
	I0531 17:57:16.977368   19117 kubeadm.go:309] 	--control-plane 
	I0531 17:57:16.977374   19117 kubeadm.go:309] 
	I0531 17:57:16.977526   19117 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0531 17:57:16.977542   19117 kubeadm.go:309] 
	I0531 17:57:16.977671   19117 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token pi9jtu.as5ddihpkf6kf5xm \
	I0531 17:57:16.977804   19117 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 
	I0531 17:57:16.978934   19117 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 17:57:16.978966   19117 cni.go:84] Creating CNI manager for ""
	I0531 17:57:16.978977   19117 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 17:57:16.980848   19117 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0531 17:57:16.982148   19117 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0531 17:57:16.993345   19117 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0531 17:57:17.014502   19117 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 17:57:17.014577   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:17.014595   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-705710 minikube.k8s.io/updated_at=2024_05_31T17_57_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=addons-705710 minikube.k8s.io/primary=true
	I0531 17:57:17.055659   19117 ops.go:34] apiserver oom_adj: -16
	I0531 17:57:17.147408   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:17.648434   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:18.147590   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:18.647717   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:19.148431   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:19.647559   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:20.148219   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:20.648439   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:21.147465   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:21.648048   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:22.147488   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:22.648443   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:23.147551   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:23.647501   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:24.147956   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:24.648052   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:25.147806   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:25.647606   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:26.147711   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:26.647511   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:27.148529   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:27.648077   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:28.147989   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:28.647441   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:29.148386   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:29.648239   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:30.148266   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:30.648288   19117 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 17:57:30.738911   19117 kubeadm.go:1107] duration metric: took 13.724398076s to wait for elevateKubeSystemPrivileges
	W0531 17:57:30.738959   19117 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0531 17:57:30.738969   19117 kubeadm.go:393] duration metric: took 24.102978255s to StartCluster
	I0531 17:57:30.739000   19117 settings.go:142] acquiring lock: {Name:mkf1adeb57ad3d510bae126cde364cbda2de4f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:30.739146   19117 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 17:57:30.739659   19117 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/kubeconfig: {Name:mk4025a54f7d507177124d778f591e88161dcba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:57:30.739873   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 17:57:30.739896   19117 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.124 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 17:57:30.742097   19117 out.go:177] * Verifying Kubernetes components...
	I0531 17:57:30.739958   19117 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0531 17:57:30.740117   19117 config.go:182] Loaded profile config "addons-705710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 17:57:30.743384   19117 addons.go:69] Setting cloud-spanner=true in profile "addons-705710"
	I0531 17:57:30.743394   19117 addons.go:69] Setting yakd=true in profile "addons-705710"
	I0531 17:57:30.743413   19117 addons.go:234] Setting addon yakd=true in "addons-705710"
	I0531 17:57:30.743411   19117 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 17:57:30.743427   19117 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-705710"
	I0531 17:57:30.743435   19117 addons.go:69] Setting helm-tiller=true in profile "addons-705710"
	I0531 17:57:30.743452   19117 addons.go:69] Setting registry=true in profile "addons-705710"
	I0531 17:57:30.743463   19117 addons.go:69] Setting volcano=true in profile "addons-705710"
	I0531 17:57:30.743470   19117 addons.go:69] Setting default-storageclass=true in profile "addons-705710"
	I0531 17:57:30.743472   19117 addons.go:69] Setting metrics-server=true in profile "addons-705710"
	I0531 17:57:30.743477   19117 addons.go:234] Setting addon registry=true in "addons-705710"
	I0531 17:57:30.743417   19117 addons.go:69] Setting inspektor-gadget=true in profile "addons-705710"
	I0531 17:57:30.743485   19117 addons.go:69] Setting ingress-dns=true in profile "addons-705710"
	I0531 17:57:30.743490   19117 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-705710"
	I0531 17:57:30.743493   19117 addons.go:234] Setting addon metrics-server=true in "addons-705710"
	I0531 17:57:30.743494   19117 addons.go:69] Setting volumesnapshots=true in profile "addons-705710"
	I0531 17:57:30.743497   19117 addons.go:69] Setting ingress=true in profile "addons-705710"
	I0531 17:57:30.743510   19117 addons.go:69] Setting gcp-auth=true in profile "addons-705710"
	I0531 17:57:30.743510   19117 addons.go:234] Setting addon inspektor-gadget=true in "addons-705710"
	I0531 17:57:30.743511   19117 addons.go:234] Setting addon volumesnapshots=true in "addons-705710"
	I0531 17:57:30.743515   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743523   19117 addons.go:234] Setting addon ingress=true in "addons-705710"
	I0531 17:57:30.743525   19117 mustload.go:65] Loading cluster: addons-705710
	I0531 17:57:30.743530   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743537   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743560   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743477   19117 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-705710"
	I0531 17:57:30.743454   19117 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-705710"
	I0531 17:57:30.743605   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743611   19117 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-705710"
	I0531 17:57:30.743706   19117 config.go:182] Loaded profile config "addons-705710": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 17:57:30.743932   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743942   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743947   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743952   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743958   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743504   19117 addons.go:234] Setting addon ingress-dns=true in "addons-705710"
	I0531 17:57:30.743966   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743966   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743439   19117 addons.go:69] Setting storage-provisioner=true in profile "addons-705710"
	I0531 17:57:30.743985   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743993   19117 addons.go:234] Setting addon storage-provisioner=true in "addons-705710"
	I0531 17:57:30.743997   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744013   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.743959   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743932   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743445   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.744096   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.743465   19117 addons.go:234] Setting addon helm-tiller=true in "addons-705710"
	I0531 17:57:30.743505   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.744121   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744098   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743970   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743462   19117 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-705710"
	I0531 17:57:30.743422   19117 addons.go:234] Setting addon cloud-spanner=true in "addons-705710"
	I0531 17:57:30.744303   19117 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-705710"
	I0531 17:57:30.744322   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744324   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.744338   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744388   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.744393   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744421   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.743485   19117 addons.go:234] Setting addon volcano=true in "addons-705710"
	I0531 17:57:30.744613   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744639   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744656   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744676   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744678   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744697   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744701   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.744705   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744715   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744716   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.744850   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.745074   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.745261   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.745295   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.745450   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.745490   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.766028   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44465
	I0531 17:57:30.766437   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.766931   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.766950   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.767282   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.767828   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.767867   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.770362   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0531 17:57:30.770865   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42605
	I0531 17:57:30.771483   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.771488   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.772019   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.772036   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.772156   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.772166   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.772547   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.773107   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.773144   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.773600   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.774123   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.774161   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.774600   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32797
	I0531 17:57:30.779533   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33365
	I0531 17:57:30.779994   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.780359   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.780807   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.780826   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.781187   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.781241   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33115
	I0531 17:57:30.781384   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43543
	I0531 17:57:30.782055   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.782094   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.782654   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.782730   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.782959   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0531 17:57:30.783274   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.783294   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.783649   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.783667   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.783715   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.784437   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.784470   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.789305   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.789326   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.789382   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38435
	I0531 17:57:30.789534   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.789587   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.790718   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.790753   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.791532   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.791595   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.792025   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.792152   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.792164   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.792442   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.792594   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.797116   19117 addons.go:234] Setting addon default-storageclass=true in "addons-705710"
	I0531 17:57:30.797156   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.797510   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.797529   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.798602   19117 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-705710"
	I0531 17:57:30.798635   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.798990   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.799010   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.803054   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.803076   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.803554   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.803619   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41869
	I0531 17:57:30.804572   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.804791   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.805028   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.805043   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.805473   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.805671   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.807359   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44225
	I0531 17:57:30.807709   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.810143   19117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0531 17:57:30.808193   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.809757   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:30.811793   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.811826   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.813285   19117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0531 17:57:30.812722   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.814531   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.814573   19117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0531 17:57:30.815923   19117 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 17:57:30.815940   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0531 17:57:30.815003   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.815958   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.815257   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35929
	I0531 17:57:30.816005   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44181
	I0531 17:57:30.816616   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.816660   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.816863   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.817227   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.817390   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.817410   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.817834   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.817855   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.817908   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.818215   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.818375   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.818678   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46217
	I0531 17:57:30.819497   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.819535   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.819746   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.819826   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.819847   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.819862   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.820116   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.820302   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.820545   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.820560   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.820608   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.820763   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.823376   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42629
	I0531 17:57:30.823488   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.825805   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0531 17:57:30.824312   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.824612   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.827775   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41791
	I0531 17:57:30.828764   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0531 17:57:30.828779   19117 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0531 17:57:30.828801   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.829863   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.830194   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.830236   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.830416   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.830435   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.830731   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.830858   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.830872   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.831029   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.831444   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32929
	I0531 17:57:30.831920   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.832487   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.832511   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.832689   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.832728   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.834258   19117 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 17:57:30.833242   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.833281   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.833447   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.834537   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45367
	I0531 17:57:30.835857   19117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:57:30.835876   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 17:57:30.835883   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.835896   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.836091   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.836249   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.836368   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.836760   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.836775   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.836835   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0531 17:57:30.837365   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.837429   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.838314   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.838351   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.838606   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.838618   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.838962   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.838966   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.839022   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.839115   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.839964   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.839984   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.840250   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.840265   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.840598   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.840656   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.840931   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:30.840944   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:30.841012   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.842865   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32979
	I0531 17:57:30.842994   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:30.843020   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:30.843026   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:30.843034   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:30.843040   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:30.843123   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.843208   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39485
	I0531 17:57:30.843271   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:30.843296   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:30.843303   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	W0531 17:57:30.843373   19117 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0531 17:57:30.843495   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.843563   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.843899   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.843912   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45005
	I0531 17:57:30.843916   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.844516   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.845085   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.845112   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.845165   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.845240   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.845289   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.845922   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.845965   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.846519   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.846538   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.846753   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.846767   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.847197   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.847399   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.847588   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.848109   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.849127   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.850893   19117 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0531 17:57:30.852200   19117 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0531 17:57:30.852215   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0531 17:57:30.852232   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.853894   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.855740   19117 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0531 17:57:30.854702   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44437
	I0531 17:57:30.856458   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.857315   19117 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0531 17:57:30.857328   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.857334   19117 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0531 17:57:30.857354   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.857355   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.856487   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I0531 17:57:30.857083   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.858018   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.858533   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.858601   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.858781   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.859103   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.859121   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.859474   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.859700   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.859947   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.860580   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.860598   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.861101   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.861366   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.862013   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.863919   19117 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0531 17:57:30.862721   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43993
	I0531 17:57:30.863358   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0531 17:57:30.863821   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.864563   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.864701   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0531 17:57:30.865294   19117 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0531 17:57:30.865308   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0531 17:57:30.864727   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.865322   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.865094   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0531 17:57:30.865428   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.865451   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.867246   19117 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0531 17:57:30.866070   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.866534   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45997
	I0531 17:57:30.866563   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.866773   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.866812   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.866842   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.868461   19117 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0531 17:57:30.868476   19117 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0531 17:57:30.868491   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.869162   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.869182   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.869201   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.869323   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.869343   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.869561   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.869624   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.869743   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.869754   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.869803   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.870033   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.871645   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.871823   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.871839   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.871838   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.871826   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.871885   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.871885   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.871904   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.871918   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.872040   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.872211   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.872261   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.872302   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.872534   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.872588   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.872641   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.872671   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.872893   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.872908   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.872930   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.873630   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.873663   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.873876   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:30.873909   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:30.874094   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.874278   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.874436   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40939
	I0531 17:57:30.874469   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.874642   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.875323   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.875353   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.875324   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.877291   19117 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.28.1
	I0531 17:57:30.875791   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.877579   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33577
	I0531 17:57:30.878521   19117 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0531 17:57:30.878537   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.879681   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34783
	I0531 17:57:30.879697   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0531 17:57:30.879890   19117 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0531 17:57:30.880048   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.880104   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.881061   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0531 17:57:30.880127   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.881106   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.881295   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.881439   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.882502   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.882542   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0531 17:57:30.883730   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0531 17:57:30.884798   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0531 17:57:30.882997   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.883247   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.884546   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.885744   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.887109   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0531 17:57:30.885929   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.886127   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.886161   19117 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 17:57:30.886309   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.886329   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.887408   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.888224   19117 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 17:57:30.888244   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0531 17:57:30.888264   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.888247   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.888443   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.888443   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.889496   19117 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0531 17:57:30.890597   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0531 17:57:30.890623   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0531 17:57:30.890639   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.889759   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.890116   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.890973   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.892416   19117 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0531 17:57:30.892282   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.893649   19117 out.go:177]   - Using image docker.io/busybox:stable
	I0531 17:57:30.894855   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
	I0531 17:57:30.894963   19117 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0531 17:57:30.894982   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0531 17:57:30.894998   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.896119   19117 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0531 17:57:30.893701   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.894231   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.894276   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.894730   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.895272   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.898304   19117 out.go:177]   - Using image docker.io/registry:2.8.3
	I0531 17:57:30.897294   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.897338   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.897456   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.897464   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.897489   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.897767   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.897970   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.899350   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.899380   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.899392   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.899396   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.899409   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.899425   19117 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0531 17:57:30.899436   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0531 17:57:30.899449   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.899509   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.899562   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.899788   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.899812   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.899870   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.899952   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.900140   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.900417   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.900598   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.902377   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.902546   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.904186   19117 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0531 17:57:30.902884   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.903021   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.905655   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.905700   19117 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 17:57:30.905708   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0531 17:57:30.905718   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.906270   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.906380   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
	I0531 17:57:30.906604   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.906736   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:30.906774   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.907334   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:30.907358   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:30.907903   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:30.908070   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:30.909022   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.909402   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.909430   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.909598   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.909760   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.909927   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.910065   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:30.910708   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:30.912489   19117 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0531 17:57:30.913992   19117 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0531 17:57:30.914014   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0531 17:57:30.914026   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:30.916734   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.917133   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:30.917155   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:30.917327   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:30.917511   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:30.917640   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:30.917743   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	W0531 17:57:30.930432   19117 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:36790->192.168.39.124:22: read: connection reset by peer
	I0531 17:57:30.930462   19117 retry.go:31] will retry after 146.699972ms: ssh: handshake failed: read tcp 192.168.39.1:36790->192.168.39.124:22: read: connection reset by peer
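Note: the two lines above show minikube's generic retry helper reacting to the reset SSH handshake by scheduling another attempt after a short delay. A minimal Go sketch of that retry-after-delay pattern follows; retryWithBackoff and its flat attempt-scaled delay are hypothetical illustrations, not the actual retry.go implementation, which adds jitter.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryWithBackoff retries fn up to maxAttempts times, sleeping an
	// attempt-scaled delay between failures (hypothetical helper).
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := time.Duration(attempt) * base
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, err)
	}

	func main() {
		calls := 0
		err := retryWithBackoff(5, 150*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed: connection reset by peer")
			}
			return nil
		})
		fmt.Println("result:", err)
	}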
	I0531 17:57:31.270403   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0531 17:57:31.272094   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0531 17:57:31.286947   19117 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0531 17:57:31.286964   19117 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0531 17:57:31.323078   19117 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0531 17:57:31.323099   19117 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0531 17:57:31.400193   19117 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0531 17:57:31.400215   19117 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0531 17:57:31.436356   19117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0531 17:57:31.436376   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0531 17:57:31.439852   19117 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0531 17:57:31.439868   19117 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0531 17:57:31.440415   19117 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0531 17:57:31.440433   19117 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0531 17:57:31.461117   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0531 17:57:31.469716   19117 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0531 17:57:31.469731   19117 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0531 17:57:31.492818   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0531 17:57:31.511427   19117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0531 17:57:31.511446   19117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0531 17:57:31.519290   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 17:57:31.522018   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 17:57:31.525328   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0531 17:57:31.525344   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0531 17:57:31.533954   19117 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0531 17:57:31.533971   19117 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0531 17:57:31.565492   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0531 17:57:31.590506   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0531 17:57:31.658956   19117 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0531 17:57:31.658980   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0531 17:57:31.671923   19117 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0531 17:57:31.671944   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0531 17:57:31.719408   19117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0531 17:57:31.719430   19117 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0531 17:57:31.755573   19117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0531 17:57:31.755601   19117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0531 17:57:31.764492   19117 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0531 17:57:31.764511   19117 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0531 17:57:31.765869   19117 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.02243084s)
	I0531 17:57:31.765884   19117 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.025980142s)
	I0531 17:57:31.765928   19117 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 17:57:31.766016   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
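Note: the dense sed pipeline above edits the coredns ConfigMap so that a hosts block (mapping host.minikube.internal to the host-side IP 192.168.39.1, with fallthrough for everything else) is inserted before the forward directive, and a log directive is inserted before errors. Reconstructing just those edits, the relevant part of the Corefile ends up roughly like the fragment below; the "..." lines stand for the untouched default plugins.

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.39.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}

With that stanza in place, in-cluster lookups of host.minikube.internal are answered directly by the hosts plugin and all other names fall through to the forwarder, which is what the later "host record injected into CoreDNS's ConfigMap" line confirms.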
	I0531 17:57:31.769461   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0531 17:57:31.769475   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0531 17:57:31.860807   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0531 17:57:31.989672   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0531 17:57:31.999411   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0531 17:57:31.999438   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0531 17:57:32.038431   19117 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 17:57:32.038454   19117 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0531 17:57:32.038471   19117 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0531 17:57:32.038489   19117 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0531 17:57:32.102991   19117 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0531 17:57:32.103023   19117 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0531 17:57:32.215319   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0531 17:57:32.215340   19117 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0531 17:57:32.325250   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0531 17:57:32.325274   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0531 17:57:32.341313   19117 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0531 17:57:32.341334   19117 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0531 17:57:32.354713   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0531 17:57:32.410617   19117 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 17:57:32.410641   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0531 17:57:32.545203   19117 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0531 17:57:32.545224   19117 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0531 17:57:32.570618   19117 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0531 17:57:32.570639   19117 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0531 17:57:32.649832   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 17:57:32.759326   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0531 17:57:32.759349   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0531 17:57:32.908816   19117 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0531 17:57:32.908839   19117 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0531 17:57:33.061962   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0531 17:57:33.061988   19117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0531 17:57:33.216940   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.946499259s)
	I0531 17:57:33.217013   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:33.217031   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:33.217403   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:33.217419   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:33.217433   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:33.217450   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:33.217461   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:33.217786   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:33.217805   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:33.327756   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0531 17:57:33.327779   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0531 17:57:33.370953   19117 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 17:57:33.370974   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0531 17:57:33.611516   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0531 17:57:33.633222   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0531 17:57:33.633243   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0531 17:57:33.856714   19117 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 17:57:33.856739   19117 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0531 17:57:34.100780   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0531 17:57:37.877613   19117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0531 17:57:37.877653   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:37.880649   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:37.881067   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:37.881091   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:37.881285   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:37.881493   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:37.881694   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:37.881840   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:38.211563   19117 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0531 17:57:38.345639   19117 addons.go:234] Setting addon gcp-auth=true in "addons-705710"
	I0531 17:57:38.345688   19117 host.go:66] Checking if "addons-705710" exists ...
	I0531 17:57:38.346026   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:38.346065   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:38.361908   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0531 17:57:38.362350   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:38.362865   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:38.362890   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:38.363224   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:38.363729   19117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 17:57:38.363757   19117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 17:57:38.379132   19117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0531 17:57:38.379564   19117 main.go:141] libmachine: () Calling .GetVersion
	I0531 17:57:38.379986   19117 main.go:141] libmachine: Using API Version  1
	I0531 17:57:38.380016   19117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 17:57:38.380337   19117 main.go:141] libmachine: () Calling .GetMachineName
	I0531 17:57:38.380540   19117 main.go:141] libmachine: (addons-705710) Calling .GetState
	I0531 17:57:38.381990   19117 main.go:141] libmachine: (addons-705710) Calling .DriverName
	I0531 17:57:38.382189   19117 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0531 17:57:38.382212   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHHostname
	I0531 17:57:38.385029   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:38.385484   19117 main.go:141] libmachine: (addons-705710) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3f:a6:a7", ip: ""} in network mk-addons-705710: {Iface:virbr1 ExpiryTime:2024-05-31 18:56:48 +0000 UTC Type:0 Mac:52:54:00:3f:a6:a7 Iaid: IPaddr:192.168.39.124 Prefix:24 Hostname:addons-705710 Clientid:01:52:54:00:3f:a6:a7}
	I0531 17:57:38.385509   19117 main.go:141] libmachine: (addons-705710) DBG | domain addons-705710 has defined IP address 192.168.39.124 and MAC address 52:54:00:3f:a6:a7 in network mk-addons-705710
	I0531 17:57:38.385649   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHPort
	I0531 17:57:38.385826   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHKeyPath
	I0531 17:57:38.385998   19117 main.go:141] libmachine: (addons-705710) Calling .GetSSHUsername
	I0531 17:57:38.386144   19117 sshutil.go:53] new ssh client: &{IP:192.168.39.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/addons-705710/id_rsa Username:docker}
	I0531 17:57:39.432891   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.160763619s)
	I0531 17:57:39.432946   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.432971   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.432980   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.971817296s)
	I0531 17:57:39.433021   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433036   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433039   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.940198622s)
	I0531 17:57:39.433062   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.913750694s)
	I0531 17:57:39.433067   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433078   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433081   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433089   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433138   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.911100942s)
	I0531 17:57:39.433155   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433163   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433205   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.867681991s)
	I0531 17:57:39.433238   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433249   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433263   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.842736064s)
	I0531 17:57:39.433278   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433286   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.433314   19117 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.667372493s)
	I0531 17:57:39.433341   19117 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.667308s)
	I0531 17:57:39.433356   19117 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0531 17:57:39.433932   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.573095977s)
	I0531 17:57:39.433955   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.433965   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.434193   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.444493339s)
	I0531 17:57:39.434212   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.434221   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.434331   19117 node_ready.go:35] waiting up to 6m0s for node "addons-705710" to be "Ready" ...
	I0531 17:57:39.434594   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.079857335s)
	I0531 17:57:39.434619   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.434631   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.434746   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.784883556s)
	W0531 17:57:39.434768   19117 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0531 17:57:39.434796   19117 retry.go:31] will retry after 130.298168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
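Note: the failure above is an ordering race, not a broken manifest. The VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind, hence "ensure CRDs are installed first"; the re-run with apply --force at 17:57:39 completes once the CRDs are established. A small Go sketch of the usual guard, polling the CRD's Established condition via kubectl before applying objects of that kind, is shown below; crdEstablished and the two-minute deadline are illustrative assumptions.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// crdEstablished shells out to kubectl and reports whether the named CRD
	// has its Established condition set to True (hypothetical helper).
	func crdEstablished(name string) bool {
		out, err := exec.Command("kubectl", "get", "crd", name,
			"-o", "jsonpath={.status.conditions[?(@.type==\"Established\")].status}").Output()
		return err == nil && strings.TrimSpace(string(out)) == "True"
	}

	func main() {
		crd := "volumesnapshotclasses.snapshot.storage.k8s.io"
		deadline := time.Now().Add(2 * time.Minute)
		for !crdEstablished(crd) {
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", crd)
				return
			}
			time.Sleep(2 * time.Second)
		}
		// Safe to apply VolumeSnapshotClass objects now.
		fmt.Println(crd, "is established; applying snapshot classes")
	}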
	I0531 17:57:39.434866   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.823321998s)
	I0531 17:57:39.434883   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.434890   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.435543   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.435564   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.435590   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.435598   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.435606   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.435611   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.435693   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.435714   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.435720   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.435727   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.435733   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.435771   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.435843   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.435863   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.435871   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.435878   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.435979   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436012   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436028   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436036   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436042   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436090   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436104   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436123   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436129   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436136   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436142   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436167   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436191   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436193   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436198   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436201   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436207   19117 addons.go:475] Verifying addon metrics-server=true in "addons-705710"
	I0531 17:57:39.436209   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436224   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436227   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436243   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436249   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436256   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436261   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436266   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436286   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436291   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436292   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436302   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436308   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436309   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436314   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436321   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436326   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436344   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436356   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.436360   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436367   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436373   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436379   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436385   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.436391   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.436761   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.436773   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.436998   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.437015   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.437185   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.437216   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.437224   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.437286   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.437294   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.437226   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.438633   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.438646   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.439113   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.439135   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.439143   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.439150   19117 addons.go:475] Verifying addon ingress=true in "addons-705710"
	I0531 17:57:39.440711   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.440726   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.440712   19117 out.go:177] * Verifying ingress addon...
	I0531 17:57:39.440250   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.440276   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.442226   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.440673   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.439218   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.442862   19117 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0531 17:57:39.443658   19117 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-705710 service yakd-dashboard -n yakd-dashboard
	
	I0531 17:57:39.443692   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.444772   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.444782   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.445091   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.445106   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.445115   19117 addons.go:475] Verifying addon registry=true in "addons-705710"
	I0531 17:57:39.445123   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.446189   19117 out.go:177] * Verifying registry addon...
	I0531 17:57:39.448452   19117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0531 17:57:39.449336   19117 node_ready.go:49] node "addons-705710" has status "Ready":"True"
	I0531 17:57:39.449359   19117 node_ready.go:38] duration metric: took 15.00594ms for node "addons-705710" to be "Ready" ...
	I0531 17:57:39.449370   19117 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 17:57:39.465190   19117 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0531 17:57:39.465219   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:39.473185   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.473201   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.473541   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:39.473823   19117 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0531 17:57:39.473850   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
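Note: the kapi.go lines above list pods matched by a label selector and poll until one reports Ready. A compact client-go sketch of the same wait loop follows; the namespace and selector are taken from the log, while the KUBECONFIG environment variable, the podReady helper, and the 500ms poll interval are assumptions for illustration.

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		selector := "app.kubernetes.io/name=ingress-nginx"
		for {
			pods, err := client.CoreV1().Pods("ingress-nginx").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
				fmt.Println("pod matching", selector, "is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // poll on a short interval, as kapi does
		}
	}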
	I0531 17:57:39.474714   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.474727   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	W0531 17:57:39.474789   19117 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
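Note: the "object has been modified" error above is a standard optimistic-concurrency conflict: something else updated the local-path StorageClass between the addon's read and its write, so the stale resourceVersion was rejected. The usual remedy is to re-read and retry on conflict, for example with client-go's retry helper as sketched below; the KUBECONFIG environment variable is a placeholder, the StorageClass name comes from the log, and the annotation key is the standard default-class marker.

	package main

	import (
		"context"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/retry"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Re-fetch the StorageClass on every attempt so the update carries the
		// latest resourceVersion instead of the stale one that caused the error.
		err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := client.StorageV1().StorageClasses().Get(context.TODO(), "local-path", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
			_, err = client.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
		if err != nil {
			panic(err)
		}
	}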
	I0531 17:57:39.480901   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:39.480919   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:39.481151   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:39.481169   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:39.492297   19117 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cgxbq" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.525808   19117 pod_ready.go:92] pod "coredns-7db6d8ff4d-cgxbq" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:39.525833   19117 pod_ready.go:81] duration metric: took 33.515306ms for pod "coredns-7db6d8ff4d-cgxbq" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.525845   19117 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zj29t" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.565683   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0531 17:57:39.580446   19117 pod_ready.go:92] pod "coredns-7db6d8ff4d-zj29t" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:39.580473   19117 pod_ready.go:81] duration metric: took 54.620064ms for pod "coredns-7db6d8ff4d-zj29t" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.580490   19117 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.591383   19117 pod_ready.go:92] pod "etcd-addons-705710" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:39.591411   19117 pod_ready.go:81] duration metric: took 10.912564ms for pod "etcd-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.591423   19117 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.608543   19117 pod_ready.go:92] pod "kube-apiserver-addons-705710" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:39.608565   19117 pod_ready.go:81] duration metric: took 17.134353ms for pod "kube-apiserver-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.608578   19117 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.838435   19117 pod_ready.go:92] pod "kube-controller-manager-addons-705710" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:39.838459   19117 pod_ready.go:81] duration metric: took 229.873403ms for pod "kube-controller-manager-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.838474   19117 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bkq22" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:39.937634   19117 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-705710" context rescaled to 1 replicas
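Note: rescaling the coredns deployment to a single replica (line above) goes through the scale subresource rather than editing the deployment spec directly. An equivalent client-go sketch, under the assumption that KUBECONFIG points at the cluster, looks roughly like this; the deployment and namespace names are the ones in the log.

	package main

	import (
		"context"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Read the current scale, set it to 1 replica, and write it back.
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}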
	I0531 17:57:39.949114   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:39.953363   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:40.244553   19117 pod_ready.go:92] pod "kube-proxy-bkq22" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:40.244582   19117 pod_ready.go:81] duration metric: took 406.099403ms for pod "kube-proxy-bkq22" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:40.244595   19117 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:40.478650   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:40.478947   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:40.645046   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.54419539s)
	I0531 17:57:40.645095   19117 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.262884163s)
	I0531 17:57:40.646482   19117 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0531 17:57:40.645098   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:40.647676   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:40.648825   19117 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0531 17:57:40.647936   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:40.647985   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:40.648861   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:40.648871   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:40.648879   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:40.650000   19117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0531 17:57:40.649108   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:40.649152   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:40.650043   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:40.650053   19117 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-705710"
	I0531 17:57:40.650021   19117 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0531 17:57:40.651243   19117 out.go:177] * Verifying csi-hostpath-driver addon...
	I0531 17:57:40.652904   19117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0531 17:57:40.702707   19117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0531 17:57:40.702740   19117 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0531 17:57:40.744981   19117 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 17:57:40.745008   19117 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0531 17:57:40.763182   19117 pod_ready.go:92] pod "kube-scheduler-addons-705710" in "kube-system" namespace has status "Ready":"True"
	I0531 17:57:40.763207   19117 pod_ready.go:81] duration metric: took 518.604162ms for pod "kube-scheduler-addons-705710" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:40.763228   19117 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace to be "Ready" ...
	I0531 17:57:40.763438   19117 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0531 17:57:40.763452   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:40.784734   19117 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0531 17:57:41.022852   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:41.023064   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:41.160653   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:41.448345   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:41.456821   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:41.658586   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:41.948423   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:41.952647   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:42.004973   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.439226645s)
	I0531 17:57:42.005030   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:42.005047   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:42.005312   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:42.005357   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:42.005370   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:42.005383   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:42.005392   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:42.005690   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:42.005704   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:42.158089   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:42.451072   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:42.459307   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:42.659025   19117 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.874253485s)
	I0531 17:57:42.659069   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:42.659085   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:42.659373   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:42.659416   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:42.659429   19117 main.go:141] libmachine: Making call to close driver server
	I0531 17:57:42.659437   19117 main.go:141] libmachine: (addons-705710) Calling .Close
	I0531 17:57:42.659442   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:42.659642   19117 main.go:141] libmachine: Successfully made call to close driver server
	I0531 17:57:42.659657   19117 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 17:57:42.659659   19117 main.go:141] libmachine: (addons-705710) DBG | Closing plugin on server side
	I0531 17:57:42.661405   19117 addons.go:475] Verifying addon gcp-auth=true in "addons-705710"
	I0531 17:57:42.662759   19117 out.go:177] * Verifying gcp-auth addon...
	I0531 17:57:42.664908   19117 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0531 17:57:42.674011   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:42.700155   19117 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0531 17:57:42.700174   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:42.769769   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:42.951884   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:42.953986   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:43.158085   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:43.168202   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:43.447915   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:43.452999   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:43.658705   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:43.668551   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:43.949164   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:43.953502   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:44.158793   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:44.168311   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:44.447652   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:44.455976   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:44.662067   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:44.670551   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:44.949095   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:44.955539   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:45.158590   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:45.168823   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:45.269624   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:45.448244   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:45.452097   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:45.658937   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:45.668593   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:45.948629   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:45.952643   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:46.159237   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:46.168592   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:46.448121   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:46.452261   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:46.659254   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:46.668837   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:46.948346   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:46.953224   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:47.158707   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:47.168185   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:47.274026   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:47.487942   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:47.488038   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:47.658604   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:47.668419   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:47.949167   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:47.953148   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:48.158725   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:48.168176   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:48.448451   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:48.452460   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:48.658630   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:48.668474   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:48.969253   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:48.969769   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:49.158350   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:49.168448   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:49.453268   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:49.453936   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:49.658934   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:49.668721   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:49.769161   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:49.947951   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:49.952787   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:50.158427   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:50.168921   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:50.447535   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:50.452438   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:50.669236   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:50.671337   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:50.948570   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:50.952333   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:51.159860   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:51.168129   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:51.448345   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:51.453033   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:51.659588   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:51.670327   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:51.771007   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:51.948499   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:51.952175   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:52.158877   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:52.167447   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:52.447580   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:52.452544   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:52.658553   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:52.668480   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:52.947646   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:52.953687   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:53.159556   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:53.168708   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:53.447753   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:53.452917   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:53.659470   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:53.668639   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:53.948402   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:53.952762   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:54.159166   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:54.168601   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:54.270041   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:54.448403   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:54.452585   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:54.658934   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:54.668754   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:54.947665   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:54.952472   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:55.158214   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:55.168152   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:55.447512   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:55.452584   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:55.658718   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:55.667555   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:55.949587   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:55.952790   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:56.160157   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:56.168043   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:56.271961   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:56.448122   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:56.455763   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:56.663513   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:56.668603   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:57.230692   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:57.230772   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:57.233077   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:57.235206   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:57.448706   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:57.453489   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:57.659140   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:57.668401   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:57.949832   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:57.953365   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:58.159044   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:58.168576   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:58.273744   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:57:58.448086   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:58.452622   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:58.658403   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:58.668184   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:58.948480   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:58.952309   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:59.158711   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:59.168479   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:59.448463   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:59.453213   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:57:59.659121   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:57:59.668514   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:57:59.949190   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:57:59.955753   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:00.158984   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:00.172658   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:00.449314   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:00.453891   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:00.659778   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:00.673257   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:00.770221   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:00.948731   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:00.952661   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:01.160084   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:01.168271   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:01.448703   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:01.452702   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:01.660051   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:01.668900   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:01.949163   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:01.955591   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:02.159761   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:02.169918   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:02.448058   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:02.453556   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:02.658088   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:02.668353   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:02.948291   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:02.954416   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:03.159190   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:03.168520   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:03.270427   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:03.733405   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:03.734509   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:03.739406   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:03.739765   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:03.948470   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:03.952406   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:04.158701   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:04.168087   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:04.448589   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:04.452450   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:04.667089   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:04.675933   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:04.948449   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:04.952370   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:05.158136   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:05.168830   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:05.270933   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:05.449206   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:05.455361   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:05.658264   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:05.669072   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:05.948247   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:05.952484   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:06.158168   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:06.168646   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:06.448119   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:06.452155   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:06.657751   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:06.668646   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:06.948509   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:06.956938   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:07.158530   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:07.168564   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:07.447406   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:07.454911   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:07.669575   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:07.671816   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:07.768662   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:07.948338   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:07.952862   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:08.158900   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:08.168284   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:08.448444   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:08.452830   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:08.659402   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:08.668468   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:08.947722   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:08.952843   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:09.159312   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:09.167820   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:09.447790   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:09.453112   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:09.658974   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:09.668157   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:09.772212   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:09.948113   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:09.952995   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:10.161570   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:10.168981   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:10.448931   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:10.453287   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:11.130684   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:11.134580   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:11.135045   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:11.135799   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:11.162726   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:11.170609   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:11.448853   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:11.460147   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:11.662178   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:11.672089   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:11.948158   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:11.956103   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:12.160201   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:12.167524   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:12.269204   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:12.448852   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:12.455771   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:12.659065   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:12.667915   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:12.948555   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:12.952571   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:13.158881   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:13.168350   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:13.447817   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:13.452597   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:13.658142   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:13.668090   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:13.947822   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:13.952936   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:14.158524   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:14.169351   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:14.269570   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:14.448659   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:14.452055   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:14.657997   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:14.668000   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:14.948140   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:14.952765   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:15.162168   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:15.168006   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:15.447526   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:15.452888   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:15.658756   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:15.668115   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:15.947981   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:15.953165   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:16.161493   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:16.168232   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:16.448302   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:16.452198   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:16.657693   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:16.667665   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:16.769797   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:16.947907   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:16.952878   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:17.161062   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:17.168753   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:17.448785   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:17.452892   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:17.659346   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:17.667935   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:17.948314   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:17.952700   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:18.158620   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:18.168555   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:18.447942   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:18.453310   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:18.657898   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:18.668260   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:18.948970   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:18.952897   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:19.159546   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:19.169778   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:19.269340   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:19.448064   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:19.453112   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:19.661417   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:19.669638   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:19.948554   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:19.952612   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:20.158419   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:20.168090   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:20.447711   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:20.452731   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:20.686823   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:20.690713   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:20.951322   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:20.956273   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:21.158966   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:21.168743   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:21.269777   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:21.448698   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:21.452638   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:21.658201   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:21.668611   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:21.947416   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:21.952786   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:22.160764   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:22.168761   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:22.448686   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:22.452085   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:22.658050   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:22.668737   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:22.948558   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:22.952610   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:23.161207   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:23.170232   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:23.270141   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:23.447792   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:23.452628   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:23.658428   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:23.667931   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:23.948554   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:23.951989   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:24.158519   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:24.167939   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:24.448412   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:24.452046   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:24.659704   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:24.668614   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:24.947787   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:24.955149   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:25.159236   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:25.167942   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:25.271626   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:25.449020   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:25.453502   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:25.657786   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:25.667676   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:25.948029   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:25.952552   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:26.159802   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:26.168338   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:26.448053   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:26.453377   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:26.658264   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:26.667997   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:26.951040   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:26.955512   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:27.158830   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:27.168263   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:27.447839   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:27.453342   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:27.657762   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:27.667991   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:27.768962   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:27.949762   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:27.954368   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:28.164927   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:28.168145   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:28.448241   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:28.455378   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:28.658038   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:28.669336   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:28.948909   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:28.952664   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:29.158289   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:29.167881   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:29.448125   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:29.453514   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:29.658502   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:29.668062   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:29.770653   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:29.948894   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:29.952747   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0531 17:58:30.158911   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:30.173549   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:30.449476   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:30.452811   19117 kapi.go:107] duration metric: took 51.004360062s to wait for kubernetes.io/minikube-addons=registry ...
	I0531 17:58:30.658577   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:30.668300   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:30.947645   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:31.157989   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:31.168028   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:31.447712   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:31.658544   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:31.668317   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:31.948046   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:32.158969   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:32.168206   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:32.269591   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:32.448176   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:32.660171   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:32.671666   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:32.947741   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:33.162167   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:33.169268   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:33.449890   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:33.658368   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:33.668801   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:33.948162   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:34.158699   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:34.168380   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:34.269908   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:34.447915   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:34.658018   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:34.668532   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:34.950590   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:35.159509   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:35.168809   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:35.448082   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:35.746118   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:35.747317   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:35.952839   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:36.159913   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:36.168428   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:36.271122   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:36.448049   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:36.658277   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:36.668210   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:36.949354   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:37.158846   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:37.168574   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:37.448657   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:37.659494   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:37.668443   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:37.948807   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:38.158221   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:38.168457   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:38.448192   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:38.658927   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:38.669062   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:38.768909   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:38.948176   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:39.159151   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:39.168158   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:39.448289   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:39.665992   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:39.668492   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:39.949664   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:40.158774   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:40.168536   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:40.448577   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:40.660426   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:40.669728   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:40.769035   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:40.947872   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:41.158305   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:41.168301   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:41.450520   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:41.659374   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:41.668138   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:41.950309   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:42.281181   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:42.283447   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:42.448236   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:42.668748   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:42.696223   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:42.769194   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:42.948469   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:43.158422   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:43.168317   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:43.456683   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:43.657627   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:43.669170   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:43.951601   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:44.158759   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:44.167664   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:44.449858   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:44.658062   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:44.670596   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:44.770128   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:44.954484   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:45.165667   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:45.168431   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:45.449591   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:45.659199   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:45.667444   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:45.948604   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:46.159425   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:46.167916   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:46.449023   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:46.658040   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:46.668386   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:46.770385   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:46.948826   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:47.157815   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:47.167888   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:47.448515   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:47.664789   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:47.671548   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:47.949003   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:48.159242   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:48.168668   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:48.845495   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:48.852741   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:48.857090   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:48.873715   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:48.948822   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:49.158626   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:49.168965   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:49.450779   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:49.658623   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:49.668721   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:49.949976   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:50.158672   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:50.168476   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:50.448309   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:50.659499   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:50.668547   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:50.948635   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:51.158641   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:51.168599   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:51.726346   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:51.726840   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:51.727465   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:51.729084   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:51.947925   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:52.165820   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:52.171308   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:52.449891   19117 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0531 17:58:52.658527   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:52.668567   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:52.948311   19117 kapi.go:107] duration metric: took 1m13.505445883s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0531 17:58:53.162666   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:53.168627   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:53.658796   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:53.668564   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:53.769657   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:54.158949   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:54.168456   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:54.658719   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:54.668331   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:55.161619   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:55.168327   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:55.659224   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:55.668667   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:56.158991   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:56.168546   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:56.270496   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:56.658872   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:56.668430   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:57.158955   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:57.167983   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:57.658333   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:57.667782   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0531 17:58:58.159337   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:58.168399   19117 kapi.go:107] duration metric: took 1m15.503487358s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0531 17:58:58.170126   19117 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-705710 cluster.
	I0531 17:58:58.171365   19117 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0531 17:58:58.172358   19117 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
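	The gcp-auth messages above describe how a pod can opt out of credential mounting via the `gcp-auth-skip-secret` label. A minimal sketch, assuming kubectl access to the cluster and assuming the webhook honors the label value "true" (only the label key is confirmed by the message above; the value and the pod name here are illustrative):
	
	  # launch a throwaway pod that should NOT get GCP credentials mounted
	  kubectl run opt-out-demo --image=busybox --restart=Never \
	    --labels=gcp-auth-skip-secret=true \
	    -- sleep 3600
	
	Pods created without that label would continue to receive the mounted credentials, per the message above.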
	I0531 17:58:58.270626   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:58:58.658304   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:59.158982   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:58:59.811034   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:59:00.158862   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:59:00.657661   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:59:00.773338   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:01.158459   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:59:01.658514   19117 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0531 17:59:02.186431   19117 kapi.go:107] duration metric: took 1m21.533521952s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0531 17:59:02.188248   19117 out.go:177] * Enabled addons: nvidia-device-plugin, metrics-server, storage-provisioner, inspektor-gadget, cloud-spanner, ingress-dns, helm-tiller, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0531 17:59:02.189477   19117 addons.go:510] duration metric: took 1m31.449519087s for enable addons: enabled=[nvidia-device-plugin metrics-server storage-provisioner inspektor-gadget cloud-spanner ingress-dns helm-tiller yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0531 17:59:03.269873   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:05.270355   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:07.769515   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:09.769698   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:11.770237   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:14.269811   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:16.769006   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:18.769350   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:20.769478   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:23.272040   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:25.771077   19117 pod_ready.go:102] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"False"
	I0531 17:59:27.270568   19117 pod_ready.go:92] pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace has status "Ready":"True"
	I0531 17:59:27.270593   19117 pod_ready.go:81] duration metric: took 1m46.507356532s for pod "metrics-server-c59844bb4-tq6bf" in "kube-system" namespace to be "Ready" ...
	I0531 17:59:27.270606   19117 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xtrgz" in "kube-system" namespace to be "Ready" ...
	I0531 17:59:27.276361   19117 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-xtrgz" in "kube-system" namespace has status "Ready":"True"
	I0531 17:59:27.276383   19117 pod_ready.go:81] duration metric: took 5.770195ms for pod "nvidia-device-plugin-daemonset-xtrgz" in "kube-system" namespace to be "Ready" ...
	I0531 17:59:27.276404   19117 pod_ready.go:38] duration metric: took 1m47.82702207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 17:59:27.276418   19117 api_server.go:52] waiting for apiserver process to appear ...
	I0531 17:59:27.276442   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0531 17:59:27.276492   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0531 17:59:27.329029   19117 cri.go:89] found id: "33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e"
	I0531 17:59:27.329055   19117 cri.go:89] found id: ""
	I0531 17:59:27.329065   19117 logs.go:276] 1 containers: [33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e]
	I0531 17:59:27.329119   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.333610   19117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0531 17:59:27.333665   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0531 17:59:27.373189   19117 cri.go:89] found id: "49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893"
	I0531 17:59:27.373213   19117 cri.go:89] found id: ""
	I0531 17:59:27.373224   19117 logs.go:276] 1 containers: [49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893]
	I0531 17:59:27.373264   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.377379   19117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0531 17:59:27.377426   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0531 17:59:27.417369   19117 cri.go:89] found id: "1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133"
	I0531 17:59:27.417393   19117 cri.go:89] found id: ""
	I0531 17:59:27.417402   19117 logs.go:276] 1 containers: [1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133]
	I0531 17:59:27.417454   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.421829   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0531 17:59:27.421886   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0531 17:59:27.461326   19117 cri.go:89] found id: "551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:27.461349   19117 cri.go:89] found id: ""
	I0531 17:59:27.461359   19117 logs.go:276] 1 containers: [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c]
	I0531 17:59:27.461406   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.465759   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0531 17:59:27.465821   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0531 17:59:27.511837   19117 cri.go:89] found id: "c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:27.511859   19117 cri.go:89] found id: ""
	I0531 17:59:27.511867   19117 logs.go:276] 1 containers: [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7]
	I0531 17:59:27.511911   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.516551   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0531 17:59:27.516605   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0531 17:59:27.553341   19117 cri.go:89] found id: "8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:27.553360   19117 cri.go:89] found id: ""
	I0531 17:59:27.553367   19117 logs.go:276] 1 containers: [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff]
	I0531 17:59:27.553417   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:27.557826   19117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0531 17:59:27.557875   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0531 17:59:27.600348   19117 cri.go:89] found id: ""
	I0531 17:59:27.600372   19117 logs.go:276] 0 containers: []
	W0531 17:59:27.600379   19117 logs.go:278] No container was found matching "kindnet"
	I0531 17:59:27.600388   19117 logs.go:123] Gathering logs for kube-proxy [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7] ...
	I0531 17:59:27.600403   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:27.639366   19117 logs.go:123] Gathering logs for kube-controller-manager [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff] ...
	I0531 17:59:27.639396   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:27.706920   19117 logs.go:123] Gathering logs for dmesg ...
	I0531 17:59:27.706952   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 17:59:27.721120   19117 logs.go:123] Gathering logs for kube-apiserver [33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e] ...
	I0531 17:59:27.721142   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e"
	I0531 17:59:27.773148   19117 logs.go:123] Gathering logs for etcd [49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893] ...
	I0531 17:59:27.773175   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893"
	I0531 17:59:27.834556   19117 logs.go:123] Gathering logs for coredns [1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133] ...
	I0531 17:59:27.834589   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133"
	I0531 17:59:27.876493   19117 logs.go:123] Gathering logs for kube-scheduler [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c] ...
	I0531 17:59:27.876516   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:27.920052   19117 logs.go:123] Gathering logs for kubelet ...
	I0531 17:59:27.920088   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0531 17:59:27.971091   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: W0531 17:57:34.308071    1274 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.971237   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308120    1274 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.971365   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: W0531 17:57:34.308154    1274 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.971507   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.974223   19117 logs.go:138] Found kubelet problem: May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.974385   19117 logs.go:138] Found kubelet problem: May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.984206   19117 logs.go:138] Found kubelet problem: May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	W0531 17:59:27.984354   19117 logs.go:138] Found kubelet problem: May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	I0531 17:59:28.005386   19117 logs.go:123] Gathering logs for describe nodes ...
	I0531 17:59:28.005427   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0531 17:59:28.137679   19117 logs.go:123] Gathering logs for CRI-O ...
	I0531 17:59:28.137714   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0531 17:59:29.282593   19117 logs.go:123] Gathering logs for container status ...
	I0531 17:59:29.282636   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 17:59:29.329337   19117 out.go:304] Setting ErrFile to fd 2...
	I0531 17:59:29.329369   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0531 17:59:29.329417   19117 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0531 17:59:29.329432   19117 out.go:239]   May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:29.329443   19117 out.go:239]   May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:29.329452   19117 out.go:239]   May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:29.329463   19117 out.go:239]   May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	W0531 17:59:29.329471   19117 out.go:239]   May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	I0531 17:59:29.329479   19117 out.go:304] Setting ErrFile to fd 2...
	I0531 17:59:29.329488   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:59:39.330999   19117 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 17:59:39.352637   19117 api_server.go:72] duration metric: took 2m8.612707843s to wait for apiserver process to appear ...
	I0531 17:59:39.352666   19117 api_server.go:88] waiting for apiserver healthz status ...
	I0531 17:59:39.352702   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0531 17:59:39.352758   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0531 17:59:39.396853   19117 cri.go:89] found id: "33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e"
	I0531 17:59:39.396872   19117 cri.go:89] found id: ""
	I0531 17:59:39.396881   19117 logs.go:276] 1 containers: [33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e]
	I0531 17:59:39.396931   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.401221   19117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0531 17:59:39.401279   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0531 17:59:39.441647   19117 cri.go:89] found id: "49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893"
	I0531 17:59:39.441666   19117 cri.go:89] found id: ""
	I0531 17:59:39.441673   19117 logs.go:276] 1 containers: [49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893]
	I0531 17:59:39.441715   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.445945   19117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0531 17:59:39.445991   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0531 17:59:39.482732   19117 cri.go:89] found id: "1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133"
	I0531 17:59:39.482751   19117 cri.go:89] found id: ""
	I0531 17:59:39.482759   19117 logs.go:276] 1 containers: [1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133]
	I0531 17:59:39.482802   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.486759   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0531 17:59:39.486808   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0531 17:59:39.522930   19117 cri.go:89] found id: "551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:39.522946   19117 cri.go:89] found id: ""
	I0531 17:59:39.522952   19117 logs.go:276] 1 containers: [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c]
	I0531 17:59:39.522988   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.527765   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0531 17:59:39.527817   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0531 17:59:39.565123   19117 cri.go:89] found id: "c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:39.565138   19117 cri.go:89] found id: ""
	I0531 17:59:39.565145   19117 logs.go:276] 1 containers: [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7]
	I0531 17:59:39.565175   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.569571   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0531 17:59:39.569614   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0531 17:59:39.616197   19117 cri.go:89] found id: "8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:39.616211   19117 cri.go:89] found id: ""
	I0531 17:59:39.616218   19117 logs.go:276] 1 containers: [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff]
	I0531 17:59:39.616255   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:39.621308   19117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0531 17:59:39.621347   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0531 17:59:39.660551   19117 cri.go:89] found id: ""
	I0531 17:59:39.660572   19117 logs.go:276] 0 containers: []
	W0531 17:59:39.660579   19117 logs.go:278] No container was found matching "kindnet"
	I0531 17:59:39.660586   19117 logs.go:123] Gathering logs for kubelet ...
	I0531 17:59:39.660596   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0531 17:59:39.704651   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: W0531 17:57:34.308071    1274 reflector.go:547] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.704794   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308120    1274 reflector.go:150] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.704921   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: W0531 17:57:34.308154    1274 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.705068   19117 logs.go:138] Found kubelet problem: May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.707554   19117 logs.go:138] Found kubelet problem: May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.707700   19117 logs.go:138] Found kubelet problem: May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.716945   19117 logs.go:138] Found kubelet problem: May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	W0531 17:59:39.717092   19117 logs.go:138] Found kubelet problem: May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	I0531 17:59:39.737526   19117 logs.go:123] Gathering logs for describe nodes ...
	I0531 17:59:39.737542   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0531 17:59:39.849952   19117 logs.go:123] Gathering logs for kube-scheduler [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c] ...
	I0531 17:59:39.849979   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:39.892732   19117 logs.go:123] Gathering logs for kube-controller-manager [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff] ...
	I0531 17:59:39.892756   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:39.970154   19117 logs.go:123] Gathering logs for CRI-O ...
	I0531 17:59:39.970186   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0531 17:59:40.681291   19117 logs.go:123] Gathering logs for container status ...
	I0531 17:59:40.681325   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 17:59:40.729719   19117 logs.go:123] Gathering logs for dmesg ...
	I0531 17:59:40.729748   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 17:59:40.745132   19117 logs.go:123] Gathering logs for kube-apiserver [33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e] ...
	I0531 17:59:40.745157   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e"
	I0531 17:59:40.796422   19117 logs.go:123] Gathering logs for etcd [49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893] ...
	I0531 17:59:40.796445   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893"
	I0531 17:59:40.858286   19117 logs.go:123] Gathering logs for coredns [1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133] ...
	I0531 17:59:40.858311   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133"
	I0531 17:59:40.897729   19117 logs.go:123] Gathering logs for kube-proxy [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7] ...
	I0531 17:59:40.897761   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:40.935949   19117 out.go:304] Setting ErrFile to fd 2...
	I0531 17:59:40.935983   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0531 17:59:40.936035   19117 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0531 17:59:40.936049   19117 out.go:239]   May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:34 addons-705710 kubelet[1274]: E0531 17:57:34.308163    1274 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-705710' and this object
	W0531 17:59:40.936056   19117 out.go:239]   May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:36 addons-705710 kubelet[1274]: W0531 17:57:36.455068    1274 reflector.go:547] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:40.936073   19117 out.go:239]   May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:36 addons-705710 kubelet[1274]: E0531 17:57:36.455153    1274 reflector.go:150] object-"local-path-storage"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-705710" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-705710' and this object
	W0531 17:59:40.936086   19117 out.go:239]   May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:42 addons-705710 kubelet[1274]: W0531 17:57:42.645100    1274 reflector.go:547] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	W0531 17:59:40.936096   19117 out.go:239]   May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	  May 31 17:57:42 addons-705710 kubelet[1274]: E0531 17:57:42.645150    1274 reflector.go:150] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-705710" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-705710' and this object
	I0531 17:59:40.936104   19117 out.go:304] Setting ErrFile to fd 2...
	I0531 17:59:40.936112   19117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:59:50.938038   19117 api_server.go:253] Checking apiserver healthz at https://192.168.39.124:8443/healthz ...
	I0531 17:59:50.943071   19117 api_server.go:279] https://192.168.39.124:8443/healthz returned 200:
	ok
	I0531 17:59:50.944035   19117 api_server.go:141] control plane version: v1.30.1
	I0531 17:59:50.944057   19117 api_server.go:131] duration metric: took 11.591383336s to wait for apiserver health ...
	I0531 17:59:50.944067   19117 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 17:59:50.944088   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0531 17:59:50.944149   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0531 17:59:50.983883   19117 cri.go:89] found id: "33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e"
	I0531 17:59:50.983901   19117 cri.go:89] found id: ""
	I0531 17:59:50.983911   19117 logs.go:276] 1 containers: [33ddd5732ca6f38cd0962dac4cc6744102be657ae1d7ae91ba5ade339b06aa9e]
	I0531 17:59:50.983962   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:50.988985   19117 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0531 17:59:50.989033   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0531 17:59:51.039508   19117 cri.go:89] found id: "49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893"
	I0531 17:59:51.039534   19117 cri.go:89] found id: ""
	I0531 17:59:51.039542   19117 logs.go:276] 1 containers: [49c69fe6c178f6d831e136bd7d2d86ed44d88f442505580ea88d6133e2286893]
	I0531 17:59:51.039588   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:51.044908   19117 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0531 17:59:51.044983   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0531 17:59:51.090982   19117 cri.go:89] found id: "1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133"
	I0531 17:59:51.091001   19117 cri.go:89] found id: ""
	I0531 17:59:51.091009   19117 logs.go:276] 1 containers: [1973d926874aacc06f2b645877939401ff303f6f5cf7978b8a029fc902de7133]
	I0531 17:59:51.091062   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:51.095323   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0531 17:59:51.095369   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0531 17:59:51.132607   19117 cri.go:89] found id: "551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:51.132626   19117 cri.go:89] found id: ""
	I0531 17:59:51.132644   19117 logs.go:276] 1 containers: [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c]
	I0531 17:59:51.132700   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:51.136671   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0531 17:59:51.136724   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0531 17:59:51.184552   19117 cri.go:89] found id: "c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:51.184571   19117 cri.go:89] found id: ""
	I0531 17:59:51.184578   19117 logs.go:276] 1 containers: [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7]
	I0531 17:59:51.184633   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:51.190587   19117 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0531 17:59:51.190640   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0531 17:59:51.232381   19117 cri.go:89] found id: "8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:51.232404   19117 cri.go:89] found id: ""
	I0531 17:59:51.232414   19117 logs.go:276] 1 containers: [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff]
	I0531 17:59:51.232469   19117 ssh_runner.go:195] Run: which crictl
	I0531 17:59:51.236412   19117 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0531 17:59:51.236460   19117 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0531 17:59:51.277344   19117 cri.go:89] found id: ""
	I0531 17:59:51.277372   19117 logs.go:276] 0 containers: []
	W0531 17:59:51.277380   19117 logs.go:278] No container was found matching "kindnet"
	I0531 17:59:51.277387   19117 logs.go:123] Gathering logs for kube-scheduler [551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c] ...
	I0531 17:59:51.277405   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 551466326f5dc9f1e11ad06936e752750ab28bfecd342cd277de35e06f33533c"
	I0531 17:59:51.336555   19117 logs.go:123] Gathering logs for kube-proxy [c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7] ...
	I0531 17:59:51.336580   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4e98d70d2d72dbecaa5b4f63135306907b7d49e88cb6dc6d41dde5d3a6601f7"
	I0531 17:59:51.377582   19117 logs.go:123] Gathering logs for kube-controller-manager [8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff] ...
	I0531 17:59:51.377605   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ffc7eef7652a8ca3c4e55b4ee8e4891528403d4ace8a89decdce308c0d1b2ff"
	I0531 17:59:51.437860   19117 logs.go:123] Gathering logs for CRI-O ...
	I0531 17:59:51.437886   19117 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-705710 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)
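Note: the trace above shows minikube repeatedly probing the apiserver healthz endpoint (https://192.168.39.124:8443/healthz returned 200) before waiting on kube-system pods. For reference, a minimal Go sketch of that kind of bounded healthz poll follows; it is not minikube's implementation, and the URL, retry interval, and timeout are illustrative values taken from or assumed around this log.

// healthzpoll.go - minimal sketch of polling an apiserver /healthz endpoint
// until it returns 200 or a deadline expires (illustrative, not minikube code).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The test cluster serves a self-signed certificate, so verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane reachable
			}
		}
		time.Sleep(10 * time.Second) // retry interval, illustrative
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.124:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}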

                                                
                                    

TestFunctional/parallel/ImageCommands/ImageBuild (5.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh pgrep buildkitd: exit status 1 (188.551131ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image build -t localhost/my-image:functional-928436 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image build -t localhost/my-image:functional-928436 testdata/build --alsologtostderr: (3.434719147s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-928436 image build -t localhost/my-image:functional-928436 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ed94c4a79c9
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-928436
--> 2f397eb0719
Successfully tagged localhost/my-image:functional-928436
2f397eb07193d878c2ea20a1ff0abe9bdbebf7ff62ef65de433579a1267838ba
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-928436 image build -t localhost/my-image:functional-928436 testdata/build --alsologtostderr:
I0531 18:40:31.228724   32808 out.go:291] Setting OutFile to fd 1 ...
I0531 18:40:31.228988   32808 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:31.228999   32808 out.go:304] Setting ErrFile to fd 2...
I0531 18:40:31.229006   32808 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:31.229169   32808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
I0531 18:40:31.229654   32808 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:31.230139   32808 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:31.230503   32808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:31.230544   32808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:31.245006   32808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40761
I0531 18:40:31.245372   32808 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:31.245954   32808 main.go:141] libmachine: Using API Version  1
I0531 18:40:31.245977   32808 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:31.246303   32808 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:31.246516   32808 main.go:141] libmachine: (functional-928436) Calling .GetState
I0531 18:40:31.248383   32808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:31.248431   32808 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:31.262522   32808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38947
I0531 18:40:31.262895   32808 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:31.263344   32808 main.go:141] libmachine: Using API Version  1
I0531 18:40:31.263363   32808 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:31.263693   32808 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:31.263894   32808 main.go:141] libmachine: (functional-928436) Calling .DriverName
I0531 18:40:31.264080   32808 ssh_runner.go:195] Run: systemctl --version
I0531 18:40:31.264098   32808 main.go:141] libmachine: (functional-928436) Calling .GetSSHHostname
I0531 18:40:31.266795   32808 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:31.267161   32808 main.go:141] libmachine: (functional-928436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:85:f7", ip: ""} in network mk-functional-928436: {Iface:virbr1 ExpiryTime:2024-05-31 19:37:37 +0000 UTC Type:0 Mac:52:54:00:03:85:f7 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-928436 Clientid:01:52:54:00:03:85:f7}
I0531 18:40:31.267188   32808 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined IP address 192.168.39.117 and MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:31.267255   32808 main.go:141] libmachine: (functional-928436) Calling .GetSSHPort
I0531 18:40:31.267406   32808 main.go:141] libmachine: (functional-928436) Calling .GetSSHKeyPath
I0531 18:40:31.267576   32808 main.go:141] libmachine: (functional-928436) Calling .GetSSHUsername
I0531 18:40:31.267732   32808 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/functional-928436/id_rsa Username:docker}
I0531 18:40:31.348809   32808 build_images.go:161] Building image from path: /tmp/build.729598464.tar
I0531 18:40:31.348882   32808 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0531 18:40:31.359648   32808 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.729598464.tar
I0531 18:40:31.363756   32808 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.729598464.tar: stat -c "%s %y" /var/lib/minikube/build/build.729598464.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.729598464.tar': No such file or directory
I0531 18:40:31.363787   32808 ssh_runner.go:362] scp /tmp/build.729598464.tar --> /var/lib/minikube/build/build.729598464.tar (3072 bytes)
I0531 18:40:31.390203   32808 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.729598464
I0531 18:40:31.400174   32808 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.729598464 -xf /var/lib/minikube/build/build.729598464.tar
I0531 18:40:31.410313   32808 crio.go:315] Building image: /var/lib/minikube/build/build.729598464
I0531 18:40:31.410374   32808 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-928436 /var/lib/minikube/build/build.729598464 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0531 18:40:34.579341   32808 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-928436 /var/lib/minikube/build/build.729598464 --cgroup-manager=cgroupfs: (3.168936886s)
I0531 18:40:34.579408   32808 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.729598464
I0531 18:40:34.605012   32808 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.729598464.tar
I0531 18:40:34.617382   32808 build_images.go:217] Built localhost/my-image:functional-928436 from /tmp/build.729598464.tar
I0531 18:40:34.617409   32808 build_images.go:133] succeeded building to: functional-928436
I0531 18:40:34.617413   32808 build_images.go:134] failed building to: 
I0531 18:40:34.617432   32808 main.go:141] libmachine: Making call to close driver server
I0531 18:40:34.617440   32808 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:34.617696   32808 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:34.617712   32808 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:34.617721   32808 main.go:141] libmachine: Making call to close driver server
I0531 18:40:34.617728   32808 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:34.617941   32808 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:34.617968   32808 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image ls: (2.233009596s)
functional_test.go:442: expected "localhost/my-image:functional-928436" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (5.86s)
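Note: the build itself succeeds above (podman build under /var/lib/minikube/build, tagged localhost/my-image:functional-928436), and the failure comes from the follow-up check that the tag appears in `image ls`. A minimal Go sketch of that final presence check is below; it assumes the binary path, profile name, and tag shown in this log, and is not the functional_test.go implementation.

// imagecheck.go - minimal sketch of verifying that a just-built image shows up
// in `minikube image ls` output (illustrative, not the test's own code).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func imagePresent(binary, profile, tag string) (bool, error) {
	out, err := exec.Command(binary, "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("image ls failed: %v\n%s", err, out)
	}
	return strings.Contains(string(out), tag), nil
}

func main() {
	ok, err := imagePresent("out/minikube-linux-amd64", "functional-928436",
		"localhost/my-image:functional-928436")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("image present:", ok)
}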

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 node stop m02 -v=7 --alsologtostderr
E0531 18:45:39.878924   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:46:20.839646   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.465316579s)

                                                
                                                
-- stdout --
	* Stopping node "ha-080809-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:45:24.444647   37152 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:45:24.444919   37152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:45:24.444930   37152 out.go:304] Setting ErrFile to fd 2...
	I0531 18:45:24.444934   37152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:45:24.445137   37152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:45:24.445376   37152 mustload.go:65] Loading cluster: ha-080809
	I0531 18:45:24.445761   37152 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:45:24.445778   37152 stop.go:39] StopHost: ha-080809-m02
	I0531 18:45:24.446161   37152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:45:24.446201   37152 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:45:24.461290   37152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0531 18:45:24.461766   37152 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:45:24.462300   37152 main.go:141] libmachine: Using API Version  1
	I0531 18:45:24.462323   37152 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:45:24.462655   37152 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:45:24.465010   37152 out.go:177] * Stopping node "ha-080809-m02"  ...
	I0531 18:45:24.466269   37152 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 18:45:24.466288   37152 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:45:24.466485   37152 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 18:45:24.466506   37152 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:45:24.469534   37152 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:45:24.469985   37152 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:45:24.470020   37152 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:45:24.470177   37152 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:45:24.470363   37152 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:45:24.470555   37152 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:45:24.470871   37152 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:45:24.554608   37152 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0531 18:45:24.610476   37152 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0531 18:45:24.665866   37152 main.go:141] libmachine: Stopping "ha-080809-m02"...
	I0531 18:45:24.665934   37152 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:45:24.667703   37152 main.go:141] libmachine: (ha-080809-m02) Calling .Stop
	I0531 18:45:24.671393   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 0/120
	I0531 18:45:25.672690   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 1/120
	I0531 18:45:26.674502   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 2/120
	I0531 18:45:27.676784   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 3/120
	I0531 18:45:28.678096   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 4/120
	I0531 18:45:29.680013   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 5/120
	I0531 18:45:30.681298   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 6/120
	I0531 18:45:31.683522   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 7/120
	I0531 18:45:32.684754   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 8/120
	I0531 18:45:33.685903   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 9/120
	I0531 18:45:34.687914   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 10/120
	I0531 18:45:35.689279   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 11/120
	I0531 18:45:36.691335   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 12/120
	I0531 18:45:37.692605   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 13/120
	I0531 18:45:38.694193   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 14/120
	I0531 18:45:39.695557   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 15/120
	I0531 18:45:40.696713   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 16/120
	I0531 18:45:41.698154   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 17/120
	I0531 18:45:42.699799   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 18/120
	I0531 18:45:43.701215   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 19/120
	I0531 18:45:44.703518   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 20/120
	I0531 18:45:45.704794   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 21/120
	I0531 18:45:46.706100   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 22/120
	I0531 18:45:47.707268   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 23/120
	I0531 18:45:48.708585   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 24/120
	I0531 18:45:49.710083   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 25/120
	I0531 18:45:50.711438   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 26/120
	I0531 18:45:51.713535   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 27/120
	I0531 18:45:52.715493   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 28/120
	I0531 18:45:53.716807   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 29/120
	I0531 18:45:54.718758   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 30/120
	I0531 18:45:55.720174   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 31/120
	I0531 18:45:56.721523   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 32/120
	I0531 18:45:57.723615   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 33/120
	I0531 18:45:58.724944   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 34/120
	I0531 18:45:59.726941   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 35/120
	I0531 18:46:00.728894   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 36/120
	I0531 18:46:01.730166   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 37/120
	I0531 18:46:02.731741   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 38/120
	I0531 18:46:03.733091   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 39/120
	I0531 18:46:04.734675   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 40/120
	I0531 18:46:05.736721   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 41/120
	I0531 18:46:06.738004   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 42/120
	I0531 18:46:07.740067   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 43/120
	I0531 18:46:08.741388   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 44/120
	I0531 18:46:09.743225   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 45/120
	I0531 18:46:10.744726   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 46/120
	I0531 18:46:11.747165   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 47/120
	I0531 18:46:12.748851   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 48/120
	I0531 18:46:13.750015   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 49/120
	I0531 18:46:14.751975   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 50/120
	I0531 18:46:15.753173   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 51/120
	I0531 18:46:16.755602   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 52/120
	I0531 18:46:17.757093   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 53/120
	I0531 18:46:18.758156   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 54/120
	I0531 18:46:19.759858   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 55/120
	I0531 18:46:20.761286   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 56/120
	I0531 18:46:21.763550   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 57/120
	I0531 18:46:22.764854   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 58/120
	I0531 18:46:23.766064   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 59/120
	I0531 18:46:24.767951   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 60/120
	I0531 18:46:25.769151   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 61/120
	I0531 18:46:26.771482   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 62/120
	I0531 18:46:27.772792   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 63/120
	I0531 18:46:28.775228   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 64/120
	I0531 18:46:29.777047   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 65/120
	I0531 18:46:30.778204   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 66/120
	I0531 18:46:31.779622   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 67/120
	I0531 18:46:32.781089   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 68/120
	I0531 18:46:33.783280   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 69/120
	I0531 18:46:34.785343   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 70/120
	I0531 18:46:35.787370   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 71/120
	I0531 18:46:36.789244   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 72/120
	I0531 18:46:37.791419   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 73/120
	I0531 18:46:38.792841   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 74/120
	I0531 18:46:39.794686   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 75/120
	I0531 18:46:40.796607   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 76/120
	I0531 18:46:41.797918   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 77/120
	I0531 18:46:42.799394   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 78/120
	I0531 18:46:43.800906   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 79/120
	I0531 18:46:44.802677   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 80/120
	I0531 18:46:45.804934   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 81/120
	I0531 18:46:46.806736   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 82/120
	I0531 18:46:47.808017   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 83/120
	I0531 18:46:48.809325   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 84/120
	I0531 18:46:49.811114   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 85/120
	I0531 18:46:50.812362   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 86/120
	I0531 18:46:51.814426   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 87/120
	I0531 18:46:52.815929   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 88/120
	I0531 18:46:53.817546   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 89/120
	I0531 18:46:54.819690   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 90/120
	I0531 18:46:55.821459   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 91/120
	I0531 18:46:56.823378   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 92/120
	I0531 18:46:57.824570   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 93/120
	I0531 18:46:58.825825   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 94/120
	I0531 18:46:59.827286   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 95/120
	I0531 18:47:00.828579   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 96/120
	I0531 18:47:01.829927   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 97/120
	I0531 18:47:02.831228   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 98/120
	I0531 18:47:03.832653   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 99/120
	I0531 18:47:04.834737   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 100/120
	I0531 18:47:05.836385   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 101/120
	I0531 18:47:06.837604   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 102/120
	I0531 18:47:07.838853   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 103/120
	I0531 18:47:08.840803   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 104/120
	I0531 18:47:09.842730   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 105/120
	I0531 18:47:10.844422   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 106/120
	I0531 18:47:11.846051   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 107/120
	I0531 18:47:12.847243   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 108/120
	I0531 18:47:13.848525   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 109/120
	I0531 18:47:14.850721   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 110/120
	I0531 18:47:15.852502   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 111/120
	I0531 18:47:16.853821   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 112/120
	I0531 18:47:17.855530   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 113/120
	I0531 18:47:18.857042   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 114/120
	I0531 18:47:19.858775   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 115/120
	I0531 18:47:20.860225   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 116/120
	I0531 18:47:21.862059   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 117/120
	I0531 18:47:22.863849   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 118/120
	I0531 18:47:23.865684   37152 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 119/120
	I0531 18:47:24.866497   37152 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0531 18:47:24.866617   37152 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-080809 node stop m02 -v=7 --alsologtostderr": exit status 30
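Note: the stop attempt above backs up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, issues the stop, then polls the VM state once per second for 120 attempts before giving up with exit status 30. A minimal Go sketch of that bounded stop-wait pattern follows; the interface, fake VM, and attempt count are illustrative assumptions, not libmachine's code.

// stopwait.go - minimal sketch of a bounded stop-wait loop: request a stop,
// poll the state once per second, and give up after a fixed number of attempts.
package main

import (
	"errors"
	"fmt"
	"time"
)

type stopper interface {
	Stop() error
	State() (string, error)
}

func stopWithTimeout(vm stopper, attempts int) error {
	if err := vm.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := vm.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// fakeVM never actually stops, mirroring the hung node in the log above.
type fakeVM struct{ stopRequested bool }

func (f *fakeVM) Stop() error            { f.stopRequested = true; return nil }
func (f *fakeVM) State() (string, error) { return "Running", nil }

func main() {
	err := stopWithTimeout(&fakeVM{}, 5) // the log uses 120 attempts; 5 keeps the demo short
	fmt.Println("result:", err)
}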
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
E0531 18:47:42.760290   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (19.077068821s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:47:24.910871   37614 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:47:24.911126   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:24.911136   37614 out.go:304] Setting ErrFile to fd 2...
	I0531 18:47:24.911140   37614 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:24.911335   37614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:47:24.911497   37614 out.go:298] Setting JSON to false
	I0531 18:47:24.911520   37614 mustload.go:65] Loading cluster: ha-080809
	I0531 18:47:24.911640   37614 notify.go:220] Checking for updates...
	I0531 18:47:24.912028   37614 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:47:24.912047   37614 status.go:255] checking status of ha-080809 ...
	I0531 18:47:24.912538   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:24.912599   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:24.928593   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0531 18:47:24.929067   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:24.929735   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:24.929762   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:24.930197   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:24.930436   37614 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:47:24.932095   37614 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:47:24.932118   37614 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:24.932398   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:24.932432   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:24.946804   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44551
	I0531 18:47:24.947247   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:24.947665   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:24.947694   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:24.948016   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:24.948182   37614 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:47:24.950845   37614 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:24.951229   37614 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:24.951255   37614 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:24.951378   37614 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:24.951645   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:24.951698   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:24.966003   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0531 18:47:24.966366   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:24.966855   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:24.966888   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:24.967184   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:24.967386   37614 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:47:24.967557   37614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:24.967594   37614 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:47:24.969916   37614 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:24.970354   37614 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:24.970385   37614 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:24.970520   37614 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:47:24.970685   37614 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:47:24.970851   37614 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:47:24.970998   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:47:25.058728   37614 ssh_runner.go:195] Run: systemctl --version
	I0531 18:47:25.065914   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:25.083610   37614 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:25.083641   37614 api_server.go:166] Checking apiserver status ...
	I0531 18:47:25.083671   37614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:25.100042   37614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:47:25.110033   37614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:25.110077   37614 ssh_runner.go:195] Run: ls
	I0531 18:47:25.115085   37614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:25.119398   37614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:25.119420   37614 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:47:25.119429   37614 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:25.119452   37614 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:47:25.119761   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:25.119798   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:25.135019   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34917
	I0531 18:47:25.135471   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:25.136003   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:25.136023   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:25.136330   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:25.136505   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:47:25.137942   37614 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:47:25.137959   37614 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:25.138229   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:25.138265   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:25.153404   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0531 18:47:25.153786   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:25.154187   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:25.154219   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:25.154541   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:25.154734   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:47:25.157770   37614 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:25.158166   37614 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:25.158185   37614 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:25.158360   37614 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:25.158650   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:25.158689   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:25.174179   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35441
	I0531 18:47:25.174553   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:25.175016   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:25.175037   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:25.175407   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:25.175611   37614 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:47:25.175775   37614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:25.175796   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:47:25.178153   37614 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:25.178517   37614 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:25.178542   37614 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:25.178620   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:47:25.178767   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:47:25.178898   37614 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:47:25.179003   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:47:43.581193   37614 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:47:43.581290   37614 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:47:43.581305   37614 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:43.581311   37614 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:47:43.581329   37614 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:43.581341   37614 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:47:43.581619   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.581657   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.597305   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45159
	I0531 18:47:43.597703   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.598173   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.598192   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.598524   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.598717   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:47:43.600202   37614 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:47:43.600227   37614 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:43.600570   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.600611   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.615135   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46773
	I0531 18:47:43.615519   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.615997   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.616018   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.616308   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.616493   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:47:43.619251   37614 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:43.619612   37614 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:43.619632   37614 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:43.619781   37614 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:43.620100   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.620138   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.635109   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44975
	I0531 18:47:43.635539   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.635971   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.635992   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.636316   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.636497   37614 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:47:43.636686   37614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:43.636707   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:47:43.639225   37614 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:43.639580   37614 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:43.639610   37614 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:43.639708   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:47:43.639881   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:47:43.640037   37614 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:47:43.640158   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:47:43.730522   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:43.748832   37614 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:43.748857   37614 api_server.go:166] Checking apiserver status ...
	I0531 18:47:43.748885   37614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:43.764875   37614 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:47:43.774921   37614 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:43.774964   37614 ssh_runner.go:195] Run: ls
	I0531 18:47:43.779764   37614 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:43.784129   37614 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:43.784148   37614 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:47:43.784167   37614 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:43.784184   37614 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:47:43.784510   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.784550   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.799188   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0531 18:47:43.799635   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.800070   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.800092   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.800401   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.800562   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:47:43.802253   37614 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:47:43.802271   37614 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:43.802659   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.802708   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.817880   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45829
	I0531 18:47:43.818270   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.818657   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.818680   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.818965   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.819166   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:47:43.821697   37614 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:43.822099   37614 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:43.822123   37614 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:43.822253   37614 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:43.822527   37614 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:43.822572   37614 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:43.837867   37614 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43411
	I0531 18:47:43.838278   37614 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:43.838682   37614 main.go:141] libmachine: Using API Version  1
	I0531 18:47:43.838704   37614 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:43.838950   37614 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:43.839166   37614 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:47:43.839367   37614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:43.839391   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:47:43.841958   37614 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:43.842372   37614 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:43.842402   37614 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:43.842527   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:47:43.842682   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:47:43.842852   37614 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:47:43.842990   37614 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:47:43.925970   37614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:43.944129   37614 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.418831374s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m03_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 18:40:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:40:49.320932   33166 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:40:49.321060   33166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:49.321070   33166 out.go:304] Setting ErrFile to fd 2...
	I0531 18:40:49.321076   33166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:49.321266   33166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:40:49.321800   33166 out.go:298] Setting JSON to false
	I0531 18:40:49.322588   33166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4995,"bootTime":1717175854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:40:49.322636   33166 start.go:139] virtualization: kvm guest
	I0531 18:40:49.324727   33166 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:40:49.326253   33166 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:40:49.327340   33166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:40:49.326246   33166 notify.go:220] Checking for updates...
	I0531 18:40:49.329352   33166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:40:49.330514   33166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.331659   33166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:40:49.332801   33166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:40:49.334056   33166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:40:49.366038   33166 out.go:177] * Using the kvm2 driver based on user configuration
	I0531 18:40:49.367249   33166 start.go:297] selected driver: kvm2
	I0531 18:40:49.367264   33166 start.go:901] validating driver "kvm2" against <nil>
	I0531 18:40:49.367273   33166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:40:49.367920   33166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:40:49.368003   33166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:40:49.381822   33166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:40:49.381856   33166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 18:40:49.382087   33166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:40:49.382131   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:40:49.382142   33166 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0531 18:40:49.382146   33166 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:40:49.382194   33166 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:40:49.382282   33166 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:40:49.383818   33166 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:40:49.384909   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:40:49.384931   33166 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:40:49.384940   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:40:49.385046   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:40:49.385058   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:40:49.385316   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:40:49.385333   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json: {Name:mkb41d6e60f69eefd53f74ad11a08ceecb99e6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:40:49.385453   33166 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:40:49.385485   33166 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "ha-080809"
	I0531 18:40:49.385504   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:40:49.385560   33166 start.go:125] createHost starting for "" (driver="kvm2")
	I0531 18:40:49.386980   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:40:49.387078   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:49.387110   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:49.400415   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0531 18:40:49.400770   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:49.401282   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:40:49.401301   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:49.401661   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:49.401851   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:40:49.402017   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:40:49.402162   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:40:49.402194   33166 client.go:168] LocalClient.Create starting
	I0531 18:40:49.402246   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:40:49.402284   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:40:49.402308   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:40:49.402380   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:40:49.402406   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:40:49.402426   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:40:49.402452   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:40:49.402473   33166 main.go:141] libmachine: (ha-080809) Calling .PreCreateCheck
	I0531 18:40:49.402806   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:40:49.403148   33166 main.go:141] libmachine: Creating machine...
	I0531 18:40:49.403161   33166 main.go:141] libmachine: (ha-080809) Calling .Create
	I0531 18:40:49.403288   33166 main.go:141] libmachine: (ha-080809) Creating KVM machine...
	I0531 18:40:49.404469   33166 main.go:141] libmachine: (ha-080809) DBG | found existing default KVM network
	I0531 18:40:49.405113   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.404947   33189 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0531 18:40:49.405128   33166 main.go:141] libmachine: (ha-080809) DBG | created network xml: 
	I0531 18:40:49.405138   33166 main.go:141] libmachine: (ha-080809) DBG | <network>
	I0531 18:40:49.405142   33166 main.go:141] libmachine: (ha-080809) DBG |   <name>mk-ha-080809</name>
	I0531 18:40:49.405148   33166 main.go:141] libmachine: (ha-080809) DBG |   <dns enable='no'/>
	I0531 18:40:49.405157   33166 main.go:141] libmachine: (ha-080809) DBG |   
	I0531 18:40:49.405170   33166 main.go:141] libmachine: (ha-080809) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0531 18:40:49.405180   33166 main.go:141] libmachine: (ha-080809) DBG |     <dhcp>
	I0531 18:40:49.405190   33166 main.go:141] libmachine: (ha-080809) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0531 18:40:49.405203   33166 main.go:141] libmachine: (ha-080809) DBG |     </dhcp>
	I0531 18:40:49.405211   33166 main.go:141] libmachine: (ha-080809) DBG |   </ip>
	I0531 18:40:49.405215   33166 main.go:141] libmachine: (ha-080809) DBG |   
	I0531 18:40:49.405221   33166 main.go:141] libmachine: (ha-080809) DBG | </network>
	I0531 18:40:49.405227   33166 main.go:141] libmachine: (ha-080809) DBG | 
	I0531 18:40:49.409855   33166 main.go:141] libmachine: (ha-080809) DBG | trying to create private KVM network mk-ha-080809 192.168.39.0/24...
	I0531 18:40:49.470492   33166 main.go:141] libmachine: (ha-080809) DBG | private KVM network mk-ha-080809 192.168.39.0/24 created
	I0531 18:40:49.470564   33166 main.go:141] libmachine: (ha-080809) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 ...
	I0531 18:40:49.470582   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.470452   33189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.470603   33166 main.go:141] libmachine: (ha-080809) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:40:49.470635   33166 main.go:141] libmachine: (ha-080809) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:40:49.690168   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.690033   33189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa...
	I0531 18:40:49.851454   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.851328   33189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/ha-080809.rawdisk...
	I0531 18:40:49.851480   33166 main.go:141] libmachine: (ha-080809) DBG | Writing magic tar header
	I0531 18:40:49.851490   33166 main.go:141] libmachine: (ha-080809) DBG | Writing SSH key tar header
	I0531 18:40:49.851497   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.851445   33189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 ...
	I0531 18:40:49.851582   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809
	I0531 18:40:49.851605   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 (perms=drwx------)
	I0531 18:40:49.851612   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:40:49.851619   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:40:49.851629   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:40:49.851635   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:40:49.851643   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:40:49.851650   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:40:49.851656   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.851663   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:40:49.851669   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:40:49.851675   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:40:49.851681   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home
	I0531 18:40:49.851686   33166 main.go:141] libmachine: (ha-080809) Creating domain...
	I0531 18:40:49.851695   33166 main.go:141] libmachine: (ha-080809) DBG | Skipping /home - not owner
	I0531 18:40:49.852691   33166 main.go:141] libmachine: (ha-080809) define libvirt domain using xml: 
	I0531 18:40:49.852718   33166 main.go:141] libmachine: (ha-080809) <domain type='kvm'>
	I0531 18:40:49.852728   33166 main.go:141] libmachine: (ha-080809)   <name>ha-080809</name>
	I0531 18:40:49.852740   33166 main.go:141] libmachine: (ha-080809)   <memory unit='MiB'>2200</memory>
	I0531 18:40:49.852749   33166 main.go:141] libmachine: (ha-080809)   <vcpu>2</vcpu>
	I0531 18:40:49.852759   33166 main.go:141] libmachine: (ha-080809)   <features>
	I0531 18:40:49.852766   33166 main.go:141] libmachine: (ha-080809)     <acpi/>
	I0531 18:40:49.852773   33166 main.go:141] libmachine: (ha-080809)     <apic/>
	I0531 18:40:49.852778   33166 main.go:141] libmachine: (ha-080809)     <pae/>
	I0531 18:40:49.852788   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.852795   33166 main.go:141] libmachine: (ha-080809)   </features>
	I0531 18:40:49.852800   33166 main.go:141] libmachine: (ha-080809)   <cpu mode='host-passthrough'>
	I0531 18:40:49.852808   33166 main.go:141] libmachine: (ha-080809)   
	I0531 18:40:49.852816   33166 main.go:141] libmachine: (ha-080809)   </cpu>
	I0531 18:40:49.852856   33166 main.go:141] libmachine: (ha-080809)   <os>
	I0531 18:40:49.852879   33166 main.go:141] libmachine: (ha-080809)     <type>hvm</type>
	I0531 18:40:49.852890   33166 main.go:141] libmachine: (ha-080809)     <boot dev='cdrom'/>
	I0531 18:40:49.852900   33166 main.go:141] libmachine: (ha-080809)     <boot dev='hd'/>
	I0531 18:40:49.852911   33166 main.go:141] libmachine: (ha-080809)     <bootmenu enable='no'/>
	I0531 18:40:49.852918   33166 main.go:141] libmachine: (ha-080809)   </os>
	I0531 18:40:49.852923   33166 main.go:141] libmachine: (ha-080809)   <devices>
	I0531 18:40:49.852930   33166 main.go:141] libmachine: (ha-080809)     <disk type='file' device='cdrom'>
	I0531 18:40:49.852943   33166 main.go:141] libmachine: (ha-080809)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/boot2docker.iso'/>
	I0531 18:40:49.852986   33166 main.go:141] libmachine: (ha-080809)       <target dev='hdc' bus='scsi'/>
	I0531 18:40:49.853001   33166 main.go:141] libmachine: (ha-080809)       <readonly/>
	I0531 18:40:49.853009   33166 main.go:141] libmachine: (ha-080809)     </disk>
	I0531 18:40:49.853015   33166 main.go:141] libmachine: (ha-080809)     <disk type='file' device='disk'>
	I0531 18:40:49.853026   33166 main.go:141] libmachine: (ha-080809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:40:49.853037   33166 main.go:141] libmachine: (ha-080809)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/ha-080809.rawdisk'/>
	I0531 18:40:49.853053   33166 main.go:141] libmachine: (ha-080809)       <target dev='hda' bus='virtio'/>
	I0531 18:40:49.853063   33166 main.go:141] libmachine: (ha-080809)     </disk>
	I0531 18:40:49.853072   33166 main.go:141] libmachine: (ha-080809)     <interface type='network'>
	I0531 18:40:49.853089   33166 main.go:141] libmachine: (ha-080809)       <source network='mk-ha-080809'/>
	I0531 18:40:49.853101   33166 main.go:141] libmachine: (ha-080809)       <model type='virtio'/>
	I0531 18:40:49.853111   33166 main.go:141] libmachine: (ha-080809)     </interface>
	I0531 18:40:49.853121   33166 main.go:141] libmachine: (ha-080809)     <interface type='network'>
	I0531 18:40:49.853133   33166 main.go:141] libmachine: (ha-080809)       <source network='default'/>
	I0531 18:40:49.853150   33166 main.go:141] libmachine: (ha-080809)       <model type='virtio'/>
	I0531 18:40:49.853170   33166 main.go:141] libmachine: (ha-080809)     </interface>
	I0531 18:40:49.853202   33166 main.go:141] libmachine: (ha-080809)     <serial type='pty'>
	I0531 18:40:49.853215   33166 main.go:141] libmachine: (ha-080809)       <target port='0'/>
	I0531 18:40:49.853221   33166 main.go:141] libmachine: (ha-080809)     </serial>
	I0531 18:40:49.853225   33166 main.go:141] libmachine: (ha-080809)     <console type='pty'>
	I0531 18:40:49.853234   33166 main.go:141] libmachine: (ha-080809)       <target type='serial' port='0'/>
	I0531 18:40:49.853241   33166 main.go:141] libmachine: (ha-080809)     </console>
	I0531 18:40:49.853249   33166 main.go:141] libmachine: (ha-080809)     <rng model='virtio'>
	I0531 18:40:49.853253   33166 main.go:141] libmachine: (ha-080809)       <backend model='random'>/dev/random</backend>
	I0531 18:40:49.853261   33166 main.go:141] libmachine: (ha-080809)     </rng>
	I0531 18:40:49.853264   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.853271   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.853275   33166 main.go:141] libmachine: (ha-080809)   </devices>
	I0531 18:40:49.853280   33166 main.go:141] libmachine: (ha-080809) </domain>
	I0531 18:40:49.853286   33166 main.go:141] libmachine: (ha-080809) 
	I0531 18:40:49.857875   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:d7:ce:f9 in network default
	I0531 18:40:49.859513   33166 main.go:141] libmachine: (ha-080809) Ensuring networks are active...
	I0531 18:40:49.859536   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:49.860192   33166 main.go:141] libmachine: (ha-080809) Ensuring network default is active
	I0531 18:40:49.860453   33166 main.go:141] libmachine: (ha-080809) Ensuring network mk-ha-080809 is active
	I0531 18:40:49.860931   33166 main.go:141] libmachine: (ha-080809) Getting domain xml...
	I0531 18:40:49.861594   33166 main.go:141] libmachine: (ha-080809) Creating domain...
	I0531 18:40:51.021353   33166 main.go:141] libmachine: (ha-080809) Waiting to get IP...
	I0531 18:40:51.022022   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.022429   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.022493   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.022423   33189 retry.go:31] will retry after 288.437015ms: waiting for machine to come up
	I0531 18:40:51.312848   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.313269   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.313303   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.313217   33189 retry.go:31] will retry after 272.356627ms: waiting for machine to come up
	I0531 18:40:51.587652   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.588060   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.588085   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.588022   33189 retry.go:31] will retry after 405.324063ms: waiting for machine to come up
	I0531 18:40:51.994525   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.995002   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.995030   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.994946   33189 retry.go:31] will retry after 448.884474ms: waiting for machine to come up
	I0531 18:40:52.445564   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:52.445963   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:52.445988   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:52.445932   33189 retry.go:31] will retry after 623.606026ms: waiting for machine to come up
	I0531 18:40:53.071239   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:53.071675   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:53.071696   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:53.071644   33189 retry.go:31] will retry after 831.335673ms: waiting for machine to come up
	I0531 18:40:53.904088   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:53.904491   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:53.904519   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:53.904464   33189 retry.go:31] will retry after 762.101756ms: waiting for machine to come up
	I0531 18:40:54.667921   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:54.668282   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:54.668310   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:54.668240   33189 retry.go:31] will retry after 1.06068241s: waiting for machine to come up
	I0531 18:40:55.730577   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:55.730948   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:55.730970   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:55.730927   33189 retry.go:31] will retry after 1.286483761s: waiting for machine to come up
	I0531 18:40:57.019251   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:57.019609   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:57.019635   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:57.019565   33189 retry.go:31] will retry after 1.611518766s: waiting for machine to come up
	I0531 18:40:58.633172   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:58.633575   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:58.633603   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:58.633520   33189 retry.go:31] will retry after 2.543412139s: waiting for machine to come up
	I0531 18:41:01.179059   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:01.179472   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:01.179520   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:01.179442   33189 retry.go:31] will retry after 3.022349633s: waiting for machine to come up
	I0531 18:41:04.202883   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:04.203239   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:04.203270   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:04.203205   33189 retry.go:31] will retry after 4.176601091s: waiting for machine to come up
	I0531 18:41:08.383778   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:08.384145   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:08.384166   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:08.384109   33189 retry.go:31] will retry after 3.418828121s: waiting for machine to come up
	I0531 18:41:11.805614   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.806024   33166 main.go:141] libmachine: (ha-080809) Found IP for machine: 192.168.39.2
	I0531 18:41:11.806044   33166 main.go:141] libmachine: (ha-080809) Reserving static IP address...
	I0531 18:41:11.806058   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has current primary IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.806394   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find host DHCP lease matching {name: "ha-080809", mac: "52:54:00:07:26:4d", ip: "192.168.39.2"} in network mk-ha-080809
	I0531 18:41:11.874853   33166 main.go:141] libmachine: (ha-080809) DBG | Getting to WaitForSSH function...
	I0531 18:41:11.874885   33166 main.go:141] libmachine: (ha-080809) Reserved static IP address: 192.168.39.2
	I0531 18:41:11.874898   33166 main.go:141] libmachine: (ha-080809) Waiting for SSH to be available...
	I0531 18:41:11.877644   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.878001   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809
	I0531 18:41:11.878025   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:07:26:4d
	I0531 18:41:11.878180   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH client type: external
	I0531 18:41:11.878218   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa (-rw-------)
	I0531 18:41:11.878249   33166 main.go:141] libmachine: (ha-080809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:41:11.878259   33166 main.go:141] libmachine: (ha-080809) DBG | About to run SSH command:
	I0531 18:41:11.878267   33166 main.go:141] libmachine: (ha-080809) DBG | exit 0
	I0531 18:41:11.881625   33166 main.go:141] libmachine: (ha-080809) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:41:11.881645   33166 main.go:141] libmachine: (ha-080809) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:41:11.881654   33166 main.go:141] libmachine: (ha-080809) DBG | command : exit 0
	I0531 18:41:11.881661   33166 main.go:141] libmachine: (ha-080809) DBG | err     : exit status 255
	I0531 18:41:11.881672   33166 main.go:141] libmachine: (ha-080809) DBG | output  : 
	I0531 18:41:14.882404   33166 main.go:141] libmachine: (ha-080809) DBG | Getting to WaitForSSH function...
	I0531 18:41:14.884819   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:14.885219   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:14.885251   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:14.885329   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH client type: external
	I0531 18:41:14.885350   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa (-rw-------)
	I0531 18:41:14.885405   33166 main.go:141] libmachine: (ha-080809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:41:14.885421   33166 main.go:141] libmachine: (ha-080809) DBG | About to run SSH command:
	I0531 18:41:14.885432   33166 main.go:141] libmachine: (ha-080809) DBG | exit 0
	I0531 18:41:15.008913   33166 main.go:141] libmachine: (ha-080809) DBG | SSH cmd err, output: <nil>: 
	I0531 18:41:15.009184   33166 main.go:141] libmachine: (ha-080809) KVM machine creation complete!
	I0531 18:41:15.009497   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:41:15.009988   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:15.010198   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:15.010337   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:41:15.010363   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:15.011520   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:41:15.011538   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:41:15.011554   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:41:15.011561   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.013441   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.013758   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.013785   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.013914   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.014118   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.014258   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.014397   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.014540   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.014759   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.014770   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:41:15.120475   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:41:15.120497   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:41:15.120504   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.122856   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.123267   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.123297   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.123444   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.123687   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.123842   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.123963   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.124187   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.124400   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.124416   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:41:15.229853   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:41:15.229955   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:41:15.229971   33166 main.go:141] libmachine: Provisioning with buildroot...
	I0531 18:41:15.229981   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.230254   33166 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:41:15.230276   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.230455   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.232970   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.233528   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.233558   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.233678   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.233852   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.234033   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.234160   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.234350   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.234506   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.234519   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:41:15.355446   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:41:15.355476   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.358146   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.358430   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.358461   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.358736   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.358918   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.359114   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.359262   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.359428   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.359641   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.359666   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:41:15.474279   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
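The SSH command above is the hostname fix-up: if no /etc/hosts line already ends with "ha-080809", it either rewrites the existing 127.0.1.1 entry or appends one. A small Go sketch of the same idea, written against an arbitrary file path so it can be tried on a copy of /etc/hosts rather than the real one:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the logged shell snippet: keep the file as-is if the
// hostname is already present, otherwise rewrite a 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	hasName := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	for _, l := range lines {
		if hasName.MatchString(l) {
			return nil // nothing to do
		}
	}
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	// "hosts.copy" is a scratch copy, not the real /etc/hosts.
	if err := ensureHostsEntry("hosts.copy", "ha-080809"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}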
	I0531 18:41:15.474324   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:41:15.474380   33166 buildroot.go:174] setting up certificates
	I0531 18:41:15.474398   33166 provision.go:84] configureAuth start
	I0531 18:41:15.474420   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.474714   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:15.477261   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.477587   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.477607   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.477767   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.479629   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.479967   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.479994   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.480087   33166 provision.go:143] copyHostCerts
	I0531 18:41:15.480115   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:41:15.480149   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:41:15.480156   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:41:15.480226   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:41:15.480318   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:41:15.480338   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:41:15.480343   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:41:15.480370   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:41:15.480430   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:41:15.480447   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:41:15.480451   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:41:15.480471   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:41:15.480527   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 18:41:15.811140   33166 provision.go:177] copyRemoteCerts
	I0531 18:41:15.811195   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:41:15.811218   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.813750   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.814156   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.814188   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.814340   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.814546   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.814692   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.814795   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:15.900076   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:41:15.900150   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:41:15.924042   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:41:15.924110   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:41:15.946872   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:41:15.946935   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:41:15.969512   33166 provision.go:87] duration metric: took 495.099265ms to configureAuth
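configureAuth above issues a server certificate whose SANs cover the loopback address, the VM IP, the hostname, localhost, and minikube, signed by the existing minikube CA. A compact crypto/x509 sketch of issuing a certificate with that SAN set; the CA here is generated in-process purely for illustration, whereas the run reuses ca.pem/ca-key.pem from its cert store, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA (the real run loads an existing one).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-080809"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-080809", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}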
	I0531 18:41:15.969542   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:41:15.969745   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:15.969815   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.972184   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.972480   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.972504   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.972854   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.973061   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.973228   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.973362   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.973500   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.973669   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.973689   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:41:16.234747   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:41:16.234779   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:41:16.234789   33166 main.go:141] libmachine: (ha-080809) Calling .GetURL
	I0531 18:41:16.235971   33166 main.go:141] libmachine: (ha-080809) DBG | Using libvirt version 6000000
	I0531 18:41:16.237976   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.238288   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.238312   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.238431   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:41:16.238447   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:41:16.238465   33166 client.go:171] duration metric: took 26.836260223s to LocalClient.Create
	I0531 18:41:16.238491   33166 start.go:167] duration metric: took 26.836327274s to libmachine.API.Create "ha-080809"
	I0531 18:41:16.238503   33166 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:41:16.238515   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:41:16.238536   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.238740   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:41:16.238763   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.240487   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.240792   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.240817   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.240931   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.241109   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.241261   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.241392   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.323371   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:41:16.327800   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:41:16.327821   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:41:16.327876   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:41:16.327941   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:41:16.327951   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:41:16.328037   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:41:16.337706   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:41:16.362009   33166 start.go:296] duration metric: took 123.494571ms for postStartSetup
	I0531 18:41:16.362071   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:41:16.362684   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:16.365319   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.365749   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.365785   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.365931   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:16.366117   33166 start.go:128] duration metric: took 26.980548012s to createHost
	I0531 18:41:16.366141   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.368224   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.368499   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.368530   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.368652   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.368812   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.368986   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.369162   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.369319   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:16.369503   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:16.369516   33166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:41:16.473752   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717180876.456803167
	
	I0531 18:41:16.473771   33166 fix.go:216] guest clock: 1717180876.456803167
	I0531 18:41:16.473778   33166 fix.go:229] Guest: 2024-05-31 18:41:16.456803167 +0000 UTC Remote: 2024-05-31 18:41:16.366129714 +0000 UTC m=+27.078403575 (delta=90.673453ms)
	I0531 18:41:16.473795   33166 fix.go:200] guest clock delta is within tolerance: 90.673453ms
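The fix.go lines above read the guest clock over SSH (the epoch value 1717180876.456803167) and compare it with the host-side reading; the ~91ms skew is accepted. A tiny sketch of that comparison using the two timestamps from the log; the one-second threshold is only illustrative, not minikube's actual tolerance.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports the absolute guest/host skew and whether it is
// below the given threshold.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1717180876, 456803167).UTC()                 // epoch seconds.nanoseconds read from the VM
	host := time.Date(2024, 5, 31, 18, 41, 16, 366129714, time.UTC) // host-side reading
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // delta=90.673453ms
}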
	I0531 18:41:16.473799   33166 start.go:83] releasing machines lock for "ha-080809", held for 27.088306457s
	I0531 18:41:16.473814   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.474049   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:16.476439   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.476737   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.476763   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.476880   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477418   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477588   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477664   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:41:16.477716   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.477811   33166 ssh_runner.go:195] Run: cat /version.json
	I0531 18:41:16.477831   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.480276   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480519   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480663   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.480691   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480818   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.481003   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.481015   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.481038   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.481178   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.481203   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.481461   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.481522   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.481658   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.481836   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.558388   33166 ssh_runner.go:195] Run: systemctl --version
	I0531 18:41:16.581555   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:41:16.735341   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:41:16.742169   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:41:16.742241   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:41:16.758883   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:41:16.758913   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:41:16.758977   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:41:16.775246   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:41:16.789674   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:41:16.789732   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:41:16.803365   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:41:16.817683   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:41:16.929142   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:41:17.084971   33166 docker.go:233] disabling docker service ...
	I0531 18:41:17.085034   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:41:17.099168   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:41:17.112314   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:41:17.234520   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:41:17.353552   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:41:17.368462   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:41:17.386660   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:41:17.386733   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.397037   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:41:17.397097   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.407433   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.418037   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.428265   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:41:17.438851   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.448764   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.465560   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
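The series of sed runs above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, set cgroup_manager to "cgroupfs", force conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A rough Go equivalent of those rewrites, meant to be run against a copy of the file; it approximates the sed expressions rather than reproducing minikube's exact logic.

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies roughly the same substitutions as the logged sed commands.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	s := string(data)
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(s, "$1\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s = regexp.MustCompile(`(?m)^(conmon_cgroup = .*)$`).
			ReplaceAllString(s, "$1\ndefault_sysctls = [\n]")
	}
	s = regexp.MustCompile(`(?m)^(default_sysctls *= *\[)`).
		ReplaceAllString(s, "$1\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	return os.WriteFile(path, []byte(s), 0644)
}

func main() {
	// Point this at a scratch copy of 02-crio.conf, not the live file.
	if err := rewriteCrioConf("02-crio.conf.copy"); err != nil {
		panic(err)
	}
}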
	I0531 18:41:17.475446   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:41:17.484144   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:41:17.484190   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:41:17.496427   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
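The netfilter step above tolerates a missing bridge sysctl (status 255), loads br_netfilter, and turns on IPv4 forwarding by writing 1 into /proc/sys. The read-back half is easy to sketch; loading the module itself still requires modprobe and root.

package main

import (
	"fmt"
	"os"
	"strings"
)

// readSysctl returns the trimmed value of a /proc/sys entry.
func readSysctl(key string) (string, error) {
	b, err := os.ReadFile("/proc/sys/" + key)
	if err != nil {
		return "", err // a missing file means the module is not loaded, as in the log
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	for _, k := range []string{"net/ipv4/ip_forward", "net/bridge/bridge-nf-call-iptables"} {
		if v, err := readSysctl(k); err != nil {
			fmt.Printf("%s: %v\n", k, err)
		} else {
			fmt.Printf("%s = %s\n", k, v)
		}
	}
}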
	I0531 18:41:17.505215   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:41:17.612656   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:41:17.744045   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:41:17.744113   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:41:17.749941   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:41:17.749999   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:41:17.753924   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:41:17.794692   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:41:17.794762   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:41:17.822710   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:41:17.854359   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:41:17.855654   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:17.858089   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:17.858369   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:17.858390   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:17.858627   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:41:17.862940   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:41:17.876330   33166 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:41:17.876427   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:41:17.876467   33166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:41:17.908946   33166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0531 18:41:17.909038   33166 ssh_runner.go:195] Run: which lz4
	I0531 18:41:17.913015   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0531 18:41:17.913099   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0531 18:41:17.917319   33166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 18:41:17.917350   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0531 18:41:19.350031   33166 crio.go:462] duration metric: took 1.436953009s to copy over tarball
	I0531 18:41:19.350090   33166 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 18:41:21.442460   33166 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.092338648s)
	I0531 18:41:21.442485   33166 crio.go:469] duration metric: took 2.092431012s to extract the tarball
	I0531 18:41:21.442493   33166 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0531 18:41:21.483381   33166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:41:21.527201   33166 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:41:21.527219   33166 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:41:21.527225   33166 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:41:21.527310   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
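The kubelet drop-in shown above is parameterised only by the Kubernetes version, node name, and node IP. A text/template sketch that reproduces the same [Service] section; the template text is reconstructed from the logged output, not taken from minikube's template file.

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values for this node, as logged above.
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.30.1", "ha-080809", "192.168.39.2"})
}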
	I0531 18:41:21.527370   33166 ssh_runner.go:195] Run: crio config
	I0531 18:41:21.578632   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:41:21.578651   33166 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0531 18:41:21.578659   33166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:41:21.578680   33166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:41:21.578811   33166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
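The kubeadm config above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml. A short sketch of iterating those documents with gopkg.in/yaml.v3; the local file name is only an example.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // a saved copy of the config above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Prints e.g. kubeadm.k8s.io/v1beta3/InitConfiguration for each document.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}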
	I0531 18:41:21.578832   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:41:21.578870   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:41:21.596044   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:41:21.596186   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0531 18:41:21.596245   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:41:21.606538   33166 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:41:21.606614   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:41:21.616171   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:41:21.632817   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:41:21.649183   33166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:41:21.665287   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0531 18:41:21.681277   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:41:21.685130   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:41:21.697282   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:41:21.813833   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:41:21.831829   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:41:21.831855   33166 certs.go:194] generating shared ca certs ...
	I0531 18:41:21.831890   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.832080   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:41:21.832137   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:41:21.832150   33166 certs.go:256] generating profile certs ...
	I0531 18:41:21.832228   33166 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:41:21.832247   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt with IP's: []
	I0531 18:41:21.960416   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt ...
	I0531 18:41:21.960441   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt: {Name:mkbb260ac793c3c3e537e9a7d865f3fd8518820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.960604   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key ...
	I0531 18:41:21.960614   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key: {Name:mk77b95ba2c89b4a3593bb327817533591115dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.960684   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851
	I0531 18:41:21.960697   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.254]
	I0531 18:41:22.023685   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 ...
	I0531 18:41:22.023714   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851: {Name:mkf3e98cf3f75fbca228e9135ae86bb53400c854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.023884   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851 ...
	I0531 18:41:22.023899   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851: {Name:mka3a76e786ac918919a584640713586090b6259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.023999   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:41:22.024079   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:41:22.024131   33166 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:41:22.024145   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt with IP's: []
	I0531 18:41:22.207623   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt ...
	I0531 18:41:22.207649   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt: {Name:mk9e3dcf53c1b719a3cc7796af13396f6fa7562e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.207804   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key ...
	I0531 18:41:22.207820   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key: {Name:mka4197e6978032a95184f85d2aa333ada018755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.207923   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:41:22.207947   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:41:22.207964   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:41:22.207982   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:41:22.207999   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:41:22.208017   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:41:22.208035   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:41:22.208052   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:41:22.208118   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:41:22.208166   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:41:22.208182   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:41:22.208211   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:41:22.208240   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:41:22.208271   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:41:22.208322   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:41:22.208359   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.208375   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.208392   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.208935   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:41:22.234417   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:41:22.257894   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:41:22.280749   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:41:22.303828   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0531 18:41:22.328316   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:41:22.352631   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:41:22.376753   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:41:22.401355   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:41:22.425203   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:41:22.448204   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:41:22.471183   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:41:22.487467   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:41:22.493236   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:41:22.504146   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.508713   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.508763   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.514472   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:41:22.525306   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:41:22.536115   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.540512   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.540545   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.546176   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:41:22.557406   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:41:22.569239   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.574119   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.574170   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.580157   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
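The test/ln runs above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL hash (b5213941.0, 51391683.0, 3ec20f2e.0 in this run), with the hash obtained from "openssl x509 -hash -noout". A sketch of the same pattern that shells out to openssl and creates the symlink; it assumes openssl is on PATH and uses a scratch directory instead of /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks certsDir/<hash>.0 to the certificate, where <hash> is
// the value printed by openssl x509 -hash -noout.
func linkByHash(cert, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like ln -f: replace any existing link
	return os.Symlink(cert, link)
}

func main() {
	if err := linkByHash("minikubeCA.pem", "certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}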
	I0531 18:41:22.592323   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:41:22.596787   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:41:22.596853   33166 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:41:22.596923   33166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:41:22.597004   33166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:41:22.645080   33166 cri.go:89] found id: ""
	I0531 18:41:22.645153   33166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:41:22.658057   33166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:41:22.670288   33166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:41:22.682617   33166 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:41:22.682646   33166 kubeadm.go:156] found existing configuration files:
	
	I0531 18:41:22.682689   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:41:22.693401   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 18:41:22.693455   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 18:41:22.706085   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:41:22.715313   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 18:41:22.715356   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 18:41:22.724902   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:41:22.734468   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 18:41:22.734516   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:41:22.744080   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:41:22.753365   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 18:41:22.753424   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:41:22.762995   33166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
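	The kubeadm init invocation above, reflowed with shell line continuations purely for readability (flags unchanged):
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem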
	I0531 18:41:23.013907   33166 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:41:34.048356   33166 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0531 18:41:34.048424   33166 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 18:41:34.048512   33166 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:41:34.048626   33166 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:41:34.048762   33166 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 18:41:34.048869   33166 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:41:34.050299   33166 out.go:204]   - Generating certificates and keys ...
	I0531 18:41:34.050371   33166 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 18:41:34.050439   33166 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 18:41:34.050546   33166 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:41:34.050626   33166 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:41:34.050714   33166 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:41:34.050798   33166 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0531 18:41:34.050876   33166 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0531 18:41:34.050999   33166 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-080809 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0531 18:41:34.051072   33166 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0531 18:41:34.051197   33166 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-080809 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0531 18:41:34.051295   33166 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:41:34.051378   33166 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:41:34.051440   33166 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0531 18:41:34.051514   33166 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:41:34.051576   33166 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:41:34.051675   33166 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0531 18:41:34.051764   33166 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:41:34.051864   33166 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:41:34.051952   33166 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:41:34.052053   33166 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:41:34.052144   33166 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:41:34.053382   33166 out.go:204]   - Booting up control plane ...
	I0531 18:41:34.053464   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:41:34.053539   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:41:34.053612   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:41:34.053719   33166 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:41:34.053844   33166 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:41:34.053915   33166 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 18:41:34.054043   33166 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0531 18:41:34.054140   33166 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0531 18:41:34.054228   33166 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.812743ms
	I0531 18:41:34.054326   33166 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0531 18:41:34.054422   33166 kubeadm.go:309] [api-check] The API server is healthy after 5.991381448s
	I0531 18:41:34.054579   33166 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:41:34.054693   33166 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:41:34.054772   33166 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:41:34.054942   33166 kubeadm.go:309] [mark-control-plane] Marking the node ha-080809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 18:41:34.055017   33166 kubeadm.go:309] [bootstrap-token] Using token: 08gm8l.f2qrzvv9b1zsgyhw
	I0531 18:41:34.056257   33166 out.go:204]   - Configuring RBAC rules ...
	I0531 18:41:34.056364   33166 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:41:34.056448   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:41:34.056584   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:41:34.056717   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:41:34.056843   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:41:34.056942   33166 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:41:34.057078   33166 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:41:34.057147   33166 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0531 18:41:34.057220   33166 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0531 18:41:34.057234   33166 kubeadm.go:309] 
	I0531 18:41:34.057282   33166 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0531 18:41:34.057289   33166 kubeadm.go:309] 
	I0531 18:41:34.057354   33166 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0531 18:41:34.057360   33166 kubeadm.go:309] 
	I0531 18:41:34.057395   33166 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0531 18:41:34.057444   33166 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:41:34.057486   33166 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:41:34.057495   33166 kubeadm.go:309] 
	I0531 18:41:34.057546   33166 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0531 18:41:34.057552   33166 kubeadm.go:309] 
	I0531 18:41:34.057603   33166 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 18:41:34.057615   33166 kubeadm.go:309] 
	I0531 18:41:34.057660   33166 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0531 18:41:34.057727   33166 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:41:34.057788   33166 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:41:34.057796   33166 kubeadm.go:309] 
	I0531 18:41:34.057870   33166 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:41:34.057936   33166 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0531 18:41:34.057947   33166 kubeadm.go:309] 
	I0531 18:41:34.058052   33166 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	I0531 18:41:34.058141   33166 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 \
	I0531 18:41:34.058161   33166 kubeadm.go:309] 	--control-plane 
	I0531 18:41:34.058167   33166 kubeadm.go:309] 
	I0531 18:41:34.058245   33166 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:41:34.058251   33166 kubeadm.go:309] 
	I0531 18:41:34.058344   33166 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	I0531 18:41:34.058471   33166 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 
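	For reference, the two join commands kubeadm printed above, assembled into single invocations (the token and discovery hash are the values from this particular run):
	
	  # join an additional control-plane node (after copying the CA material, as the message above notes)
	  kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	    --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --control-plane
	  # join a worker node
	  kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	    --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890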
	I0531 18:41:34.058483   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:41:34.058488   33166 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0531 18:41:34.059834   33166 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:41:34.061026   33166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:41:34.066612   33166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0531 18:41:34.066626   33166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:41:34.084600   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:41:34.432458   33166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:41:34.432539   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:34.432596   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809 minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=true
	I0531 18:41:34.647235   33166 ops.go:34] apiserver oom_adj: -16
	I0531 18:41:34.648243   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:35.149072   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:35.648902   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:36.148558   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:36.648301   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:37.149179   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:37.648615   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:38.149220   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:38.648981   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:39.149074   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:39.648859   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:40.149141   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:40.648381   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:41.148640   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:41.648259   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:42.149154   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:42.648390   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:43.148383   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:43.648814   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:44.148873   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:44.648660   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:45.149112   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:45.648543   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:46.148849   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:46.648488   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:47.149158   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:47.243429   33166 kubeadm.go:1107] duration metric: took 12.810950463s to wait for elevateKubeSystemPrivileges
	W0531 18:41:47.243471   33166 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0531 18:41:47.243478   33166 kubeadm.go:393] duration metric: took 24.646633116s to StartCluster
	I0531 18:41:47.243493   33166 settings.go:142] acquiring lock: {Name:mkf1adeb57ad3d510bae126cde364cbda2de4f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:47.243576   33166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:41:47.244432   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/kubeconfig: {Name:mk4025a54f7d507177124d778f591e88161dcba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:47.244678   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:41:47.244709   33166 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:41:47.244731   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:41:47.244743   33166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0531 18:41:47.244805   33166 addons.go:69] Setting storage-provisioner=true in profile "ha-080809"
	I0531 18:41:47.244819   33166 addons.go:69] Setting default-storageclass=true in profile "ha-080809"
	I0531 18:41:47.244834   33166 addons.go:234] Setting addon storage-provisioner=true in "ha-080809"
	I0531 18:41:47.244858   33166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-080809"
	I0531 18:41:47.244862   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:41:47.244934   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:47.245270   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.245297   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.245347   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.245396   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.260442   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0531 18:41:47.260520   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0531 18:41:47.260930   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.260998   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.261443   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.261458   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.261586   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.261607   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.261762   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.261876   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.262083   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.262341   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.262371   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.264404   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:41:47.264758   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:41:47.265376   33166 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 18:41:47.265597   33166 addons.go:234] Setting addon default-storageclass=true in "ha-080809"
	I0531 18:41:47.265638   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:41:47.265999   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.266028   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.276877   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0531 18:41:47.277307   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.277735   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.277760   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.278055   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.278239   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.279741   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:47.281787   33166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:41:47.280229   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
	I0531 18:41:47.283158   33166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:41:47.283178   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:41:47.283200   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:47.283555   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.284131   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.284147   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.284574   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.285144   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.285169   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.286111   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.286529   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:47.286549   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.286796   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:47.286946   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:47.287100   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:47.287222   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:47.299067   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0531 18:41:47.299371   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.299879   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.299902   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.300264   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.300452   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.301739   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:47.302065   33166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:41:47.302082   33166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:41:47.302108   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:47.304732   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.305154   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:47.305185   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.305327   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:47.305517   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:47.305672   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:47.305806   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:47.348574   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:41:47.457360   33166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:41:47.565378   33166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:41:47.886036   33166 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
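	The sed pipeline above rewrites the coredns ConfigMap so the Corefile gains a hosts block mapping host.minikube.internal to 192.168.39.1. One way to confirm the record landed, assuming a kubeconfig with access to this cluster:
	
	  kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'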
	I0531 18:41:48.152663   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.152686   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.152829   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.152844   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153063   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153081   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153091   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.153098   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153168   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.153169   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153220   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153231   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.153241   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153323   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153334   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153435   33166 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0531 18:41:48.153442   33166 round_trippers.go:469] Request Headers:
	I0531 18:41:48.153453   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:41:48.153468   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:41:48.153637   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.153642   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153663   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.164286   33166 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0531 18:41:48.165069   33166 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0531 18:41:48.165085   33166 round_trippers.go:469] Request Headers:
	I0531 18:41:48.165095   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:41:48.165099   33166 round_trippers.go:473]     Content-Type: application/json
	I0531 18:41:48.165103   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:41:48.168547   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:41:48.168680   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.168701   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.169047   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.169070   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.170791   33166 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 18:41:48.169052   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.172057   33166 addons.go:510] duration metric: took 927.311258ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0531 18:41:48.172092   33166 start.go:245] waiting for cluster config update ...
	I0531 18:41:48.172109   33166 start.go:254] writing updated cluster config ...
	I0531 18:41:48.173752   33166 out.go:177] 
	I0531 18:41:48.175529   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:48.175621   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:48.177171   33166 out.go:177] * Starting "ha-080809-m02" control-plane node in "ha-080809" cluster
	I0531 18:41:48.178578   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:41:48.178603   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:41:48.178685   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:41:48.178700   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:41:48.178789   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:48.179425   33166 start.go:360] acquireMachinesLock for ha-080809-m02: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:41:48.179512   33166 start.go:364] duration metric: took 30.457µs to acquireMachinesLock for "ha-080809-m02"
	I0531 18:41:48.179537   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:41:48.179618   33166 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0531 18:41:48.181016   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:41:48.181087   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:48.181108   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:48.195758   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0531 18:41:48.196241   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:48.196680   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:48.196708   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:48.197109   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:48.197274   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:41:48.197449   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:41:48.197604   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:41:48.197627   33166 client.go:168] LocalClient.Create starting
	I0531 18:41:48.197660   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:41:48.197700   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:41:48.197734   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:41:48.197805   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:41:48.197831   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:41:48.197852   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:41:48.197883   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:41:48.197897   33166 main.go:141] libmachine: (ha-080809-m02) Calling .PreCreateCheck
	I0531 18:41:48.198069   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:41:48.198434   33166 main.go:141] libmachine: Creating machine...
	I0531 18:41:48.198445   33166 main.go:141] libmachine: (ha-080809-m02) Calling .Create
	I0531 18:41:48.198569   33166 main.go:141] libmachine: (ha-080809-m02) Creating KVM machine...
	I0531 18:41:48.199911   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found existing default KVM network
	I0531 18:41:48.200038   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found existing private KVM network mk-ha-080809
	I0531 18:41:48.200157   33166 main.go:141] libmachine: (ha-080809-m02) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 ...
	I0531 18:41:48.200198   33166 main.go:141] libmachine: (ha-080809-m02) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:41:48.200249   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.200161   33574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:41:48.200379   33166 main.go:141] libmachine: (ha-080809-m02) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:41:48.411679   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.411560   33574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa...
	I0531 18:41:48.922794   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.922687   33574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/ha-080809-m02.rawdisk...
	I0531 18:41:48.922829   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Writing magic tar header
	I0531 18:41:48.922842   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Writing SSH key tar header
	I0531 18:41:48.922855   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.922789   33574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 ...
	I0531 18:41:48.922888   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02
	I0531 18:41:48.922912   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:41:48.922947   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:41:48.922959   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 (perms=drwx------)
	I0531 18:41:48.922968   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:41:48.922976   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:41:48.922986   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:41:48.923002   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:41:48.923013   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:41:48.923026   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:41:48.923037   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home
	I0531 18:41:48.923064   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:41:48.923077   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:41:48.923081   33166 main.go:141] libmachine: (ha-080809-m02) Creating domain...
	I0531 18:41:48.923091   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Skipping /home - not owner
	I0531 18:41:48.923891   33166 main.go:141] libmachine: (ha-080809-m02) define libvirt domain using xml: 
	I0531 18:41:48.923913   33166 main.go:141] libmachine: (ha-080809-m02) <domain type='kvm'>
	I0531 18:41:48.923922   33166 main.go:141] libmachine: (ha-080809-m02)   <name>ha-080809-m02</name>
	I0531 18:41:48.923936   33166 main.go:141] libmachine: (ha-080809-m02)   <memory unit='MiB'>2200</memory>
	I0531 18:41:48.923948   33166 main.go:141] libmachine: (ha-080809-m02)   <vcpu>2</vcpu>
	I0531 18:41:48.923955   33166 main.go:141] libmachine: (ha-080809-m02)   <features>
	I0531 18:41:48.923964   33166 main.go:141] libmachine: (ha-080809-m02)     <acpi/>
	I0531 18:41:48.923975   33166 main.go:141] libmachine: (ha-080809-m02)     <apic/>
	I0531 18:41:48.923992   33166 main.go:141] libmachine: (ha-080809-m02)     <pae/>
	I0531 18:41:48.924000   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924010   33166 main.go:141] libmachine: (ha-080809-m02)   </features>
	I0531 18:41:48.924025   33166 main.go:141] libmachine: (ha-080809-m02)   <cpu mode='host-passthrough'>
	I0531 18:41:48.924037   33166 main.go:141] libmachine: (ha-080809-m02)   
	I0531 18:41:48.924047   33166 main.go:141] libmachine: (ha-080809-m02)   </cpu>
	I0531 18:41:48.924056   33166 main.go:141] libmachine: (ha-080809-m02)   <os>
	I0531 18:41:48.924068   33166 main.go:141] libmachine: (ha-080809-m02)     <type>hvm</type>
	I0531 18:41:48.924081   33166 main.go:141] libmachine: (ha-080809-m02)     <boot dev='cdrom'/>
	I0531 18:41:48.924096   33166 main.go:141] libmachine: (ha-080809-m02)     <boot dev='hd'/>
	I0531 18:41:48.924108   33166 main.go:141] libmachine: (ha-080809-m02)     <bootmenu enable='no'/>
	I0531 18:41:48.924119   33166 main.go:141] libmachine: (ha-080809-m02)   </os>
	I0531 18:41:48.924129   33166 main.go:141] libmachine: (ha-080809-m02)   <devices>
	I0531 18:41:48.924139   33166 main.go:141] libmachine: (ha-080809-m02)     <disk type='file' device='cdrom'>
	I0531 18:41:48.924153   33166 main.go:141] libmachine: (ha-080809-m02)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/boot2docker.iso'/>
	I0531 18:41:48.924169   33166 main.go:141] libmachine: (ha-080809-m02)       <target dev='hdc' bus='scsi'/>
	I0531 18:41:48.924194   33166 main.go:141] libmachine: (ha-080809-m02)       <readonly/>
	I0531 18:41:48.924205   33166 main.go:141] libmachine: (ha-080809-m02)     </disk>
	I0531 18:41:48.924216   33166 main.go:141] libmachine: (ha-080809-m02)     <disk type='file' device='disk'>
	I0531 18:41:48.924230   33166 main.go:141] libmachine: (ha-080809-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:41:48.924259   33166 main.go:141] libmachine: (ha-080809-m02)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/ha-080809-m02.rawdisk'/>
	I0531 18:41:48.924284   33166 main.go:141] libmachine: (ha-080809-m02)       <target dev='hda' bus='virtio'/>
	I0531 18:41:48.924298   33166 main.go:141] libmachine: (ha-080809-m02)     </disk>
	I0531 18:41:48.924312   33166 main.go:141] libmachine: (ha-080809-m02)     <interface type='network'>
	I0531 18:41:48.924336   33166 main.go:141] libmachine: (ha-080809-m02)       <source network='mk-ha-080809'/>
	I0531 18:41:48.924346   33166 main.go:141] libmachine: (ha-080809-m02)       <model type='virtio'/>
	I0531 18:41:48.924355   33166 main.go:141] libmachine: (ha-080809-m02)     </interface>
	I0531 18:41:48.924372   33166 main.go:141] libmachine: (ha-080809-m02)     <interface type='network'>
	I0531 18:41:48.924394   33166 main.go:141] libmachine: (ha-080809-m02)       <source network='default'/>
	I0531 18:41:48.924409   33166 main.go:141] libmachine: (ha-080809-m02)       <model type='virtio'/>
	I0531 18:41:48.924419   33166 main.go:141] libmachine: (ha-080809-m02)     </interface>
	I0531 18:41:48.924429   33166 main.go:141] libmachine: (ha-080809-m02)     <serial type='pty'>
	I0531 18:41:48.924438   33166 main.go:141] libmachine: (ha-080809-m02)       <target port='0'/>
	I0531 18:41:48.924446   33166 main.go:141] libmachine: (ha-080809-m02)     </serial>
	I0531 18:41:48.924452   33166 main.go:141] libmachine: (ha-080809-m02)     <console type='pty'>
	I0531 18:41:48.924459   33166 main.go:141] libmachine: (ha-080809-m02)       <target type='serial' port='0'/>
	I0531 18:41:48.924466   33166 main.go:141] libmachine: (ha-080809-m02)     </console>
	I0531 18:41:48.924472   33166 main.go:141] libmachine: (ha-080809-m02)     <rng model='virtio'>
	I0531 18:41:48.924488   33166 main.go:141] libmachine: (ha-080809-m02)       <backend model='random'>/dev/random</backend>
	I0531 18:41:48.924499   33166 main.go:141] libmachine: (ha-080809-m02)     </rng>
	I0531 18:41:48.924507   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924517   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924525   33166 main.go:141] libmachine: (ha-080809-m02)   </devices>
	I0531 18:41:48.924534   33166 main.go:141] libmachine: (ha-080809-m02) </domain>
	I0531 18:41:48.924544   33166 main.go:141] libmachine: (ha-080809-m02) 
	I0531 18:41:48.930354   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:09:1b:f4 in network default
	I0531 18:41:48.930849   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring networks are active...
	I0531 18:41:48.930884   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:48.931590   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring network default is active
	I0531 18:41:48.931988   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring network mk-ha-080809 is active
	I0531 18:41:48.932418   33166 main.go:141] libmachine: (ha-080809-m02) Getting domain xml...
	I0531 18:41:48.933230   33166 main.go:141] libmachine: (ha-080809-m02) Creating domain...
	I0531 18:41:50.098222   33166 main.go:141] libmachine: (ha-080809-m02) Waiting to get IP...
	I0531 18:41:50.098929   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.099311   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.099368   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.099298   33574 retry.go:31] will retry after 261.984814ms: waiting for machine to come up
	I0531 18:41:50.362946   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.363453   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.363487   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.363407   33574 retry.go:31] will retry after 282.991286ms: waiting for machine to come up
	I0531 18:41:50.647939   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.648489   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.648514   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.648433   33574 retry.go:31] will retry after 294.33331ms: waiting for machine to come up
	I0531 18:41:50.943773   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.944279   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.944304   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.944254   33574 retry.go:31] will retry after 450.31315ms: waiting for machine to come up
	I0531 18:41:51.395954   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:51.396412   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:51.396451   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:51.396368   33574 retry.go:31] will retry after 507.346302ms: waiting for machine to come up
	I0531 18:41:51.905205   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:51.905754   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:51.905774   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:51.905701   33574 retry.go:31] will retry after 715.723902ms: waiting for machine to come up
	I0531 18:41:52.623538   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:52.623889   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:52.623915   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:52.623859   33574 retry.go:31] will retry after 994.815511ms: waiting for machine to come up
	I0531 18:41:53.621021   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:53.621493   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:53.621527   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:53.621438   33574 retry.go:31] will retry after 1.011563252s: waiting for machine to come up
	I0531 18:41:54.634371   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:54.634848   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:54.634875   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:54.634824   33574 retry.go:31] will retry after 1.753029158s: waiting for machine to come up
	I0531 18:41:56.388945   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:56.389369   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:56.389395   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:56.389342   33574 retry.go:31] will retry after 1.744688574s: waiting for machine to come up
	I0531 18:41:58.135207   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:58.135711   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:58.135745   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:58.135653   33574 retry.go:31] will retry after 2.898419717s: waiting for machine to come up
	I0531 18:42:01.037050   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:01.037455   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:01.037472   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:01.037440   33574 retry.go:31] will retry after 2.809334691s: waiting for machine to come up
	I0531 18:42:03.848654   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:03.849111   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:03.849140   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:03.849055   33574 retry.go:31] will retry after 3.131942057s: waiting for machine to come up
	I0531 18:42:06.982387   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:06.982746   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:06.982776   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:06.982702   33574 retry.go:31] will retry after 5.344270754s: waiting for machine to come up
	I0531 18:42:12.331422   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:12.331856   33166 main.go:141] libmachine: (ha-080809-m02) Found IP for machine: 192.168.39.51
	I0531 18:42:12.331888   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has current primary IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
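
The "will retry after ..." lines above show libmachine polling libvirt for the VM's DHCP lease with a delay that grows on every attempt until an IP appears. Below is a minimal, self-contained Go sketch of that poll-with-growing-backoff pattern; lookupLeaseIP and waitForIP are hypothetical stand-ins for illustration, not minikube's retry.go API.

// Hypothetical sketch of the wait-for-IP pattern the log shows: poll a
// condition and back off with a growing, jittered delay between attempts.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupLeaseIP stands in for querying libvirt for the domain's DHCP lease.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 14 { // pretend the lease shows up on the 15th try
		return "", errNoLease
	}
	return "192.168.39.51", nil
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupLeaseIP(attempt)
		if err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, mirroring the increasing
		// intervals printed in the log (450ms, 507ms, 715ms, ...).
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return "", fmt.Errorf("timed out after %v waiting for IP", timeout)
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	fmt.Println(ip, err)
}
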
	I0531 18:42:12.331920   33166 main.go:141] libmachine: (ha-080809-m02) Reserving static IP address...
	I0531 18:42:12.332240   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find host DHCP lease matching {name: "ha-080809-m02", mac: "52:54:00:38:66:ad", ip: "192.168.39.51"} in network mk-ha-080809
	I0531 18:42:12.403133   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Getting to WaitForSSH function...
	I0531 18:42:12.403164   33166 main.go:141] libmachine: (ha-080809-m02) Reserved static IP address: 192.168.39.51
	I0531 18:42:12.403175   33166 main.go:141] libmachine: (ha-080809-m02) Waiting for SSH to be available...
	I0531 18:42:12.405482   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:12.405817   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809
	I0531 18:42:12.405842   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:38:66:ad
	I0531 18:42:12.405975   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH client type: external
	I0531 18:42:12.406000   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa (-rw-------)
	I0531 18:42:12.406029   33166 main.go:141] libmachine: (ha-080809-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:42:12.406048   33166 main.go:141] libmachine: (ha-080809-m02) DBG | About to run SSH command:
	I0531 18:42:12.406059   33166 main.go:141] libmachine: (ha-080809-m02) DBG | exit 0
	I0531 18:42:12.409696   33166 main.go:141] libmachine: (ha-080809-m02) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:42:12.409717   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:42:12.409727   33166 main.go:141] libmachine: (ha-080809-m02) DBG | command : exit 0
	I0531 18:42:12.409780   33166 main.go:141] libmachine: (ha-080809-m02) DBG | err     : exit status 255
	I0531 18:42:12.409800   33166 main.go:141] libmachine: (ha-080809-m02) DBG | output  : 
	I0531 18:42:15.411838   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Getting to WaitForSSH function...
	I0531 18:42:15.414210   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.414631   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.414663   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.414786   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH client type: external
	I0531 18:42:15.414811   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa (-rw-------)
	I0531 18:42:15.414836   33166 main.go:141] libmachine: (ha-080809-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:42:15.414852   33166 main.go:141] libmachine: (ha-080809-m02) DBG | About to run SSH command:
	I0531 18:42:15.414870   33166 main.go:141] libmachine: (ha-080809-m02) DBG | exit 0
	I0531 18:42:15.537201   33166 main.go:141] libmachine: (ha-080809-m02) DBG | SSH cmd err, output: <nil>: 
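
WaitForSSH above shells out to the system ssh client with a fixed option set and runs `exit 0`, retrying until the command returns status 0 (the attempt at 18:42:12 fails with status 255 because the guest is not ready; the one at 18:42:15 succeeds). A rough sketch of that probe under the external-client path shown in the log; probeSSH is an illustrative name, not libmachine's function.

// Illustrative sketch (not libmachine's code) of probing SSH readiness:
// shell out to ssh with strict, non-interactive options and run `exit 0`,
// treating a zero exit status as "SSH is available".
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func probeSSH(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	// Hypothetical values; the key path mirrors the one used in the log.
	key := "/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa"
	for i := 0; i < 10; i++ {
		if err := probeSSH("docker", "192.168.39.51", key); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries roughly every 3s
	}
	fmt.Println("gave up waiting for SSH")
}
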
	I0531 18:42:15.537405   33166 main.go:141] libmachine: (ha-080809-m02) KVM machine creation complete!
	I0531 18:42:15.537722   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:42:15.538255   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:15.538434   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:15.538617   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:42:15.538630   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:42:15.539803   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:42:15.539822   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:42:15.539830   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:42:15.539839   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.541848   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.542129   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.542165   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.542303   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.542459   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.542590   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.542717   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.542880   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.543091   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.543103   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:42:15.640427   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:42:15.640448   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:42:15.640455   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.642974   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.643355   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.643385   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.643512   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.643699   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.643853   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.644015   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.644189   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.644339   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.644349   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:42:15.745863   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:42:15.745937   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:42:15.745951   33166 main.go:141] libmachine: Provisioning with buildroot...
	I0531 18:42:15.745963   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.746222   33166 buildroot.go:166] provisioning hostname "ha-080809-m02"
	I0531 18:42:15.746243   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.746434   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.748775   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.749092   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.749110   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.749302   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.749453   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.749615   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.749747   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.749888   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.750058   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.750069   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809-m02 && echo "ha-080809-m02" | sudo tee /etc/hostname
	I0531 18:42:15.866636   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809-m02
	
	I0531 18:42:15.866667   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.869486   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.869851   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.869879   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.870082   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.870264   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.870401   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.870499   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.870616   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.870830   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.870850   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:42:15.978458   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:42:15.978498   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:42:15.978519   33166 buildroot.go:174] setting up certificates
	I0531 18:42:15.978543   33166 provision.go:84] configureAuth start
	I0531 18:42:15.978558   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.978824   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:15.981244   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.981530   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.981572   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.981708   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.983573   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.983926   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.983958   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.984074   33166 provision.go:143] copyHostCerts
	I0531 18:42:15.984107   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:42:15.984147   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:42:15.984159   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:42:15.984237   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:42:15.984333   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:42:15.984358   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:42:15.984365   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:42:15.984405   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:42:15.984469   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:42:15.984493   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:42:15.984502   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:42:15.984533   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:42:15.984597   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809-m02 san=[127.0.0.1 192.168.39.51 ha-080809-m02 localhost minikube]
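
The server certificate generated here must be valid for every name and address the node can be reached by, which is why the SAN list covers 127.0.0.1, the node IP, the hostname, localhost and minikube. Below is a compact sketch of producing such a CA-signed server certificate with Go's crypto/x509; it creates a throwaway CA inline so it runs standalone, whereas minikube loads ca.pem/ca-key.pem from the .minikube directory, and error handling is elided.

// Minimal sketch of generating a server cert with SANs, signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the example; in practice the existing CA key/cert are loaded.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-080809-m02", Organization: []string{"jenkins.ha-080809-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs: node IPs plus every hostname the API may be dialed with.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.51")},
		DNSNames:    []string{"ha-080809-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
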
	I0531 18:42:16.325482   33166 provision.go:177] copyRemoteCerts
	I0531 18:42:16.325546   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:42:16.325575   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.327934   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.328161   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.328199   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.328332   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.328510   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.328645   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.328785   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.406957   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:42:16.407024   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0531 18:42:16.430210   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:42:16.430255   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 18:42:16.453025   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:42:16.453079   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:42:16.475768   33166 provision.go:87] duration metric: took 497.215522ms to configureAuth
	I0531 18:42:16.475793   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:42:16.475950   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:16.476058   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.478460   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.478785   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.478817   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.478962   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.479159   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.479341   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.479507   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.479682   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:16.479878   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:16.479895   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:42:16.738800   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:42:16.738832   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:42:16.738842   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetURL
	I0531 18:42:16.740084   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using libvirt version 6000000
	I0531 18:42:16.742151   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.742474   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.742502   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.742652   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:42:16.742669   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:42:16.742677   33166 client.go:171] duration metric: took 28.545039675s to LocalClient.Create
	I0531 18:42:16.742705   33166 start.go:167] duration metric: took 28.54510089s to libmachine.API.Create "ha-080809"
	I0531 18:42:16.742718   33166 start.go:293] postStartSetup for "ha-080809-m02" (driver="kvm2")
	I0531 18:42:16.742732   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:42:16.742764   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.742985   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:42:16.743009   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.745196   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.745509   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.745534   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.745637   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.745808   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.745959   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.746087   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.827714   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:42:16.832039   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:42:16.832061   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:42:16.832132   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:42:16.832218   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:42:16.832228   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:42:16.832323   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:42:16.841372   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:42:16.863828   33166 start.go:296] duration metric: took 121.097824ms for postStartSetup
	I0531 18:42:16.863867   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:42:16.864426   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:16.866895   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.867277   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.867306   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.867558   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:42:16.867729   33166 start.go:128] duration metric: took 28.688101798s to createHost
	I0531 18:42:16.867750   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.869644   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.869998   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.870022   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.870173   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.870343   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.870464   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.870576   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.870708   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:16.870856   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:16.870867   33166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:42:16.965713   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717180936.947375000
	
	I0531 18:42:16.965737   33166 fix.go:216] guest clock: 1717180936.947375000
	I0531 18:42:16.965747   33166 fix.go:229] Guest: 2024-05-31 18:42:16.947375 +0000 UTC Remote: 2024-05-31 18:42:16.867740303 +0000 UTC m=+87.580014158 (delta=79.634697ms)
	I0531 18:42:16.965762   33166 fix.go:200] guest clock delta is within tolerance: 79.634697ms
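
The fix.go lines compare the guest clock (read over SSH via `date +%s.%N`) with the host clock and proceed only when the delta is inside a tolerance. A small illustrative sketch of that computation follows; withinTolerance and the 2s tolerance are assumptions made for the example, not minikube's constants.

// Sketch: parse the guest's unix timestamp and check the skew against the local clock.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func withinTolerance(guestUnix string, tolerance time.Duration) (time.Duration, bool) {
	secs, _ := strconv.ParseFloat(guestUnix, 64)
	guest := time.Unix(0, int64(secs*float64(time.Second))) // float precision is fine for a skew check
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// "1717180936.947375000" is the guest clock value reported in the log above.
	delta, ok := withinTolerance("1717180936.947375000", 2*time.Second)
	fmt.Println(delta, ok)
}
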
	I0531 18:42:16.965767   33166 start.go:83] releasing machines lock for "ha-080809-m02", held for 28.786242379s
	I0531 18:42:16.965786   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.966024   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:16.968685   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.969049   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.969072   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.971421   33166 out.go:177] * Found network options:
	I0531 18:42:16.973067   33166 out.go:177]   - NO_PROXY=192.168.39.2
	W0531 18:42:16.974146   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:42:16.974176   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.974677   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.974907   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.975025   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:42:16.975064   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	W0531 18:42:16.975109   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:42:16.975183   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:42:16.975203   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.977519   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.977840   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.977867   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.977892   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.978028   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.978286   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.978308   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.978311   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.978462   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.978470   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.978650   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.978675   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.978756   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.978852   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:17.206888   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:42:17.213726   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:42:17.213783   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:42:17.230408   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:42:17.230429   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:42:17.230480   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:42:17.248351   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:42:17.263119   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:42:17.263175   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:42:17.278088   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:42:17.291629   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:42:17.410123   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:42:17.568505   33166 docker.go:233] disabling docker service ...
	I0531 18:42:17.568569   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:42:17.583008   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:42:17.595886   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:42:17.728751   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:42:17.857263   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:42:17.872288   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:42:17.891225   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:42:17.891286   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.903161   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:42:17.903218   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.913466   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.923296   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.933772   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:42:17.944134   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.954157   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.973486   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
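
The sequence of `sudo sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf on the guest so CRI-O uses the registry.k8s.io/pause:3.9 image and the cgroupfs cgroup manager. The sketch below performs the equivalent edits with Go regexps on an in-memory copy of such a file; the sample TOML content and the local rewrite are illustrative only, since minikube runs the edits remotely over SSH.

// Sketch of the pause_image / cgroup_manager rewrite done by the sed commands above.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
`
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)

	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf) // in minikube this file lives on the guest and is edited over SSH
}
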
	I0531 18:42:17.983927   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:42:17.993512   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:42:17.993558   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:42:18.007549   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:42:18.016858   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:18.136685   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:42:18.293489   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:42:18.293572   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:42:18.299706   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:42:18.299765   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:42:18.303554   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:42:18.344243   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
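
After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for its version. A minimal sketch of that wait-then-query pattern, assuming it runs on the guest with sudo available; waitForFile is a hypothetical helper, not minikube's function.

// Sketch: poll for the CRI-O socket, then query crictl once it exists.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
	fmt.Println(string(out), err)
}
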
	I0531 18:42:18.344336   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:42:18.372229   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:42:18.400711   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:42:18.401905   33166 out.go:177]   - env NO_PROXY=192.168.39.2
	I0531 18:42:18.402950   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:18.405564   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:18.405931   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:18.405958   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:18.406102   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:42:18.410187   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:42:18.422505   33166 mustload.go:65] Loading cluster: ha-080809
	I0531 18:42:18.422677   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:18.422911   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:18.422951   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:18.437290   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0531 18:42:18.437653   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:18.438074   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:18.438095   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:18.438389   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:18.438565   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:42:18.440083   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:42:18.440383   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:18.440421   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:18.453809   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0531 18:42:18.454184   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:18.454752   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:18.454771   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:18.455085   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:18.455271   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:42:18.455419   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.51
	I0531 18:42:18.455430   33166 certs.go:194] generating shared ca certs ...
	I0531 18:42:18.455447   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.455548   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:42:18.455581   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:42:18.455591   33166 certs.go:256] generating profile certs ...
	I0531 18:42:18.455654   33166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:42:18.455677   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351
	I0531 18:42:18.455690   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.254]
	I0531 18:42:18.757388   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 ...
	I0531 18:42:18.757413   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351: {Name:mk418eef2b1b5b94d8c904d182dbe3b875e7ee9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.757570   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351 ...
	I0531 18:42:18.757584   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351: {Name:mk358ccbd8c3bd1a84fbc48b0c1c3365acd5707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.757675   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:42:18.757811   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:42:18.757934   33166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:42:18.757949   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:42:18.757962   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:42:18.757975   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:42:18.757987   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:42:18.757998   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:42:18.758009   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:42:18.758021   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:42:18.758035   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:42:18.758081   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:42:18.758106   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:42:18.758115   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:42:18.758136   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:42:18.758157   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:42:18.758178   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:42:18.758213   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:42:18.758238   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:18.758256   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:42:18.758267   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:42:18.758296   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:42:18.761276   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:18.761655   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:42:18.761681   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:18.761855   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:42:18.762061   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:42:18.762213   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:42:18.762333   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:42:18.837272   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0531 18:42:18.843046   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0531 18:42:18.854961   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0531 18:42:18.859461   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0531 18:42:18.870098   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0531 18:42:18.873977   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0531 18:42:18.884607   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0531 18:42:18.888618   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0531 18:42:18.898666   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0531 18:42:18.902718   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0531 18:42:18.914386   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0531 18:42:18.918527   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0531 18:42:18.928780   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:42:18.954314   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:42:18.981552   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:42:19.008971   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:42:19.036307   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0531 18:42:19.071878   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:42:19.094850   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:42:19.118913   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:42:19.142462   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:42:19.166700   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:42:19.190475   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:42:19.213581   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0531 18:42:19.230209   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0531 18:42:19.246490   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0531 18:42:19.262496   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0531 18:42:19.279299   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0531 18:42:19.295348   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0531 18:42:19.311727   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0531 18:42:19.327108   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:42:19.332562   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:42:19.342869   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.347193   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.347236   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.352873   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:42:19.363861   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:42:19.374647   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.378831   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.378866   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.384243   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:42:19.394718   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:42:19.405020   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.409207   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.409267   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.414941   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
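The three openssl/ln steps above populate an OpenSSL-style trust directory: each CA is hashed with `openssl x509 -hash -noout` and then linked as `<hash>.0` under /etc/ssl/certs. A minimal sketch of that technique (paths taken from the log; the program itself is illustrative, not minikube's code):

    // Sketch: hash a CA with openssl and create the /etc/ssl/certs/<hash>.0 symlink.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
        certsDir := "/etc/ssl/certs"

        // openssl x509 -hash -noout -in <cert> prints the subject hash, e.g. b5213941.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))

        // Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // -f: replace an existing link if present
        if err := os.Symlink(certPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", certPath)
    }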
	I0531 18:42:19.428266   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:42:19.432619   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:42:19.432675   33166 kubeadm.go:928] updating node {m02 192.168.39.51 8443 v1.30.1 crio true true} ...
	I0531 18:42:19.432748   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
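The kubelet drop-in above is rendered per node, substituting the node name and IP into the ExecStart flags. A rough sketch of how such a unit can be templated with text/template (the template text and field names here are illustrative, not minikube's actual template):

    // Illustrative only: render a kubelet systemd drop-in from per-node values.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "BinDir":   "/var/lib/minikube/binaries/v1.30.1",
            "NodeName": "ha-080809-m02",
            "NodeIP":   "192.168.39.51",
        })
    }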
	I0531 18:42:19.432769   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:42:19.432806   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:42:19.450389   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:42:19.450450   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
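kube-vip is deployed as a static pod: the kubelet watches /etc/kubernetes/manifests and starts whatever pod manifests appear there, so installing the config above reduces to writing the YAML into that directory (the `scp memory --> /etc/kubernetes/manifests/kube-vip.yaml` step a few lines below). A minimal sketch, assuming the rendered YAML is already held in a string:

    // A sketch: install a static pod by writing its manifest where the kubelet watches.
    package main

    import "os"

    func main() {
        // renderedYAML stands in for the kube-vip manifest shown above.
        renderedYAML := "apiVersion: v1\nkind: Pod\n# ...rest of the manifest shown above...\n"
        if err := os.WriteFile("/etc/kubernetes/manifests/kube-vip.yaml", []byte(renderedYAML), 0o644); err != nil {
            panic(err)
        }
    }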
	I0531 18:42:19.450507   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:42:19.460335   33166 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0531 18:42:19.460396   33166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0531 18:42:19.469942   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0531 18:42:19.469965   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:42:19.470035   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:42:19.470044   33166 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0531 18:42:19.470049   33166 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0531 18:42:19.474337   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0531 18:42:19.474365   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0531 18:42:20.790275   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:42:20.790371   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:42:20.795470   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0531 18:42:20.795503   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0531 18:42:20.936936   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:42:20.970319   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:42:20.970410   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:42:20.986184   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0531 18:42:20.986230   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
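kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a `checksum=file:...sha256` query, i.e. each download is verified against the published .sha256 file before being cached and copied to the node. A stdlib-only sketch of that download-and-verify step (URL from the log; the local output path is illustrative):

    // A stdlib-only sketch of downloading a file and verifying it against its .sha256.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl" // from the log

        bin, err := fetch(url)
        if err != nil {
            panic(err)
        }
        sumFile, err := fetch(url + ".sha256")
        if err != nil {
            panic(err)
        }

        // The .sha256 file holds the hex digest (optionally followed by a file name).
        want := strings.Fields(strings.TrimSpace(string(sumFile)))[0]
        sum := sha256.Sum256(bin)
        if got := hex.EncodeToString(sum[:]); got != want {
            panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
        }
        if err := os.WriteFile("kubectl", bin, 0o755); err != nil { // illustrative local path
            panic(err)
        }
        fmt.Println("verified and saved kubectl")
    }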
	I0531 18:42:21.426654   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0531 18:42:21.436524   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0531 18:42:21.454081   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:42:21.471128   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:42:21.488405   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:42:21.492365   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:42:21.504513   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:21.623038   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:42:21.640365   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:42:21.640724   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:21.640773   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:21.655291   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0531 18:42:21.655806   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:21.656334   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:21.656364   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:21.656637   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:21.656842   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:42:21.657012   33166 start.go:316] joinCluster: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:42:21.657109   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 18:42:21.657130   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:42:21.660196   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:21.660701   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:42:21.660732   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:21.660920   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:42:21.661112   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:42:21.661264   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:42:21.661401   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:42:21.831919   33166 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:42:21.831952   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bc22m0.jeczlb21vma4k8v2 --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m02 --control-plane --apiserver-advertise-address=192.168.39.51 --apiserver-bind-port=8443"
	I0531 18:42:43.237812   33166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bc22m0.jeczlb21vma4k8v2 --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m02 --control-plane --apiserver-advertise-address=192.168.39.51 --apiserver-bind-port=8443": (21.405833616s)
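The join command above pins the cluster CA via --discovery-token-ca-cert-hash sha256:<hex>; kubeadm derives that value as the SHA-256 of the CA certificate's DER-encoded public key (its SubjectPublicKeyInfo). A short sketch of that computation, reading the CA from the path used earlier in the log:

    // Sketch: compute the discovery-token-ca-cert-hash value from the cluster CA.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path used earlier in the log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo of the CA certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }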
	I0531 18:42:43.237858   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 18:42:43.790411   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809-m02 minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=false
	I0531 18:42:43.953718   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-080809-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0531 18:42:44.073425   33166 start.go:318] duration metric: took 22.416410562s to joinCluster
	I0531 18:42:44.073491   33166 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:42:44.075048   33166 out.go:177] * Verifying Kubernetes components...
	I0531 18:42:44.073770   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:44.076497   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:44.324056   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:42:44.349686   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:42:44.350031   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0531 18:42:44.350111   33166 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0531 18:42:44.350368   33166 node_ready.go:35] waiting up to 6m0s for node "ha-080809-m02" to be "Ready" ...
	I0531 18:42:44.350465   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:44.350475   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:44.350486   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:44.350495   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:44.362695   33166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0531 18:42:44.850794   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:44.850820   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:44.850830   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:44.850834   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:44.854745   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:45.351052   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:45.351077   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:45.351085   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:45.351088   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:45.355423   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:45.850662   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:45.850740   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:45.850763   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:45.850780   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:45.858050   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:42:46.350633   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:46.350655   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:46.350664   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:46.350668   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:46.353852   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:46.354524   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:46.851298   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:46.851317   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:46.851328   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:46.851332   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:46.854597   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:47.350916   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:47.350944   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:47.350957   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:47.350963   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:47.354470   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:47.850569   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:47.850592   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:47.850603   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:47.850608   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:47.856495   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:42:48.351061   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:48.351081   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:48.351088   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:48.351092   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:48.354373   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:48.355120   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:48.851198   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:48.851226   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:48.851237   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:48.851244   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:48.854416   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:49.351159   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:49.351184   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:49.351195   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:49.351202   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:49.354786   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:49.851306   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:49.851332   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:49.851342   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:49.851348   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:49.855199   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:50.351196   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:50.351220   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:50.351227   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:50.351231   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:50.354966   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:50.355755   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:50.851037   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:50.851059   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:50.851065   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:50.851069   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:50.853880   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.350885   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.350913   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.350921   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.350925   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.363380   33166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0531 18:42:51.850720   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.850745   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.850757   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.850763   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.855049   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:51.855558   33166 node_ready.go:49] node "ha-080809-m02" has status "Ready":"True"
	I0531 18:42:51.855575   33166 node_ready.go:38] duration metric: took 7.505175551s for node "ha-080809-m02" to be "Ready" ...
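The node_ready wait above is a simple poll: GET /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition reports True or the 6m0s budget runs out. A stdlib-only sketch of such a loop (API server URL and node name taken from the log; the HTTP client is assumed to already carry the client-certificate credentials minikube uses):

    // A stdlib-only sketch of the node-readiness poll shown in the log.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    // nodeStatus mirrors only the fields needed from GET /api/v1/nodes/<name>.
    type nodeStatus struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func waitNodeReady(client *http.Client, apiServer, node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(apiServer + "/api/v1/nodes/" + node)
            if err == nil && resp.StatusCode == http.StatusOK {
                var n nodeStatus
                if json.NewDecoder(resp.Body).Decode(&n) == nil {
                    for _, c := range n.Status.Conditions {
                        if c.Type == "Ready" && c.Status == "True" {
                            resp.Body.Close()
                            return nil
                        }
                    }
                }
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("node %s not Ready within %s", node, timeout)
    }

    func main() {
        // Values from the log: API server 192.168.39.2:8443, node ha-080809-m02.
        err := waitNodeReady(http.DefaultClient, "https://192.168.39.2:8443", "ha-080809-m02", 6*time.Minute)
        fmt.Println(err)
    }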
	I0531 18:42:51.855582   33166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:42:51.855632   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:42:51.855641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.855648   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.855654   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.859768   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:51.865277   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.865339   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr2wg
	I0531 18:42:51.865347   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.865354   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.865358   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.873796   33166 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0531 18:42:51.874368   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.874382   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.874389   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.874393   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.876561   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.877090   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.877114   33166 pod_ready.go:81] duration metric: took 11.814298ms for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.877125   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.877186   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbkpk
	I0531 18:42:51.877197   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.877207   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.877222   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.879561   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.880188   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.880204   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.880215   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.880218   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.882374   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.882875   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.882898   33166 pod_ready.go:81] duration metric: took 5.765051ms for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.882910   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.882961   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809
	I0531 18:42:51.882972   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.882980   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.882984   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.885252   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.885821   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.885841   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.885849   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.885856   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.888077   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.888647   33166 pod_ready.go:92] pod "etcd-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.888667   33166 pod_ready.go:81] duration metric: took 5.750422ms for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.888677   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.888730   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:51.888739   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.888747   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.888751   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.890856   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.891457   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.891471   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.891477   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.891481   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.893391   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:42:52.389441   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:52.389462   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.389469   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.389473   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.392807   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:52.393536   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:52.393552   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.393559   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.393564   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.395933   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:52.888976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:52.888998   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.889006   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.889009   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.891993   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:52.892662   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:52.892677   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.892683   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.892686   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.895477   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.389535   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:53.389568   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.389578   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.389582   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.393164   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:53.394092   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:53.394110   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.394120   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.394127   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.396914   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.889839   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:53.889863   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.889870   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.889874   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.892989   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:53.893783   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:53.893797   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.893805   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.893810   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.896281   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.896811   33166 pod_ready.go:92] pod "etcd-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:53.896827   33166 pod_ready.go:81] duration metric: took 2.00814425s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:53.896840   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:53.896881   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:42:53.896888   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.896894   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.896899   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.899252   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.899994   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:53.900007   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.900014   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.900017   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.902370   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.902818   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:53.902833   33166 pod_ready.go:81] duration metric: took 5.987724ms for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:53.902841   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:54.051219   33166 request.go:629] Waited for 148.326366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.051311   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.051321   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.051328   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.051334   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.054339   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:54.251307   33166 request.go:629] Waited for 196.349531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.251362   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.251381   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.251390   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.251396   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.254484   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.451091   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.451116   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.451127   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.451133   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.454734   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.650867   33166 request.go:629] Waited for 195.275171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.650930   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.650935   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.650942   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.650946   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.654049   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.903699   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.903721   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.903729   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.903732   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.906961   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.050923   33166 request.go:629] Waited for 143.259066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.050976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.050981   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.050988   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.050991   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.054289   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.403839   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:55.403863   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.403874   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.403879   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.407563   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.451568   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.451592   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.451602   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.451608   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.454745   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.455446   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:55.455466   33166 pod_ready.go:81] duration metric: took 1.552619438s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.455478   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.650834   33166 request.go:629] Waited for 195.287771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:42:55.650901   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:42:55.650912   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.650923   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.650930   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.653798   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:55.850878   33166 request.go:629] Waited for 196.310301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:55.850939   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:55.850946   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.850956   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.850966   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.853765   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:55.854398   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:55.854421   33166 pod_ready.go:81] duration metric: took 398.933767ms for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.854433   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:56.051459   33166 request.go:629] Waited for 196.914204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.051522   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.051528   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.051534   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.051540   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.054787   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.250949   33166 request.go:629] Waited for 195.283991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.251010   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.251017   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.251028   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.251033   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.253708   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:56.451520   33166 request.go:629] Waited for 96.309549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.451584   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.451590   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.451597   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.451600   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.455066   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.651079   33166 request.go:629] Waited for 195.352948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.651137   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.651144   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.651163   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.651175   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.654377   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.855568   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.855591   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.855598   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.855602   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.858651   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.050986   33166 request.go:629] Waited for 191.303858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.051050   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.051056   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.051063   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.051069   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.054339   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.355386   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:57.355405   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.355413   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.355416   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.358572   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.451447   33166 request.go:629] Waited for 92.186798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.451539   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.451554   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.451561   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.451565   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.458668   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:42:57.855525   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:57.855544   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.855552   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.855556   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.861544   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:42:57.862763   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.862778   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.862786   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.862792   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.865541   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:57.866079   33166 pod_ready.go:102] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"False"
	I0531 18:42:58.354931   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:58.354954   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.354962   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.354966   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.379542   33166 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0531 18:42:58.380302   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:58.380319   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.380326   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.380329   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.383420   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:58.855138   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:58.855163   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.855171   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.855176   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.857977   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:58.858525   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:58.858541   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.858548   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.858551   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.861345   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.354614   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:59.354641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.354651   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.354656   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.357676   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:59.358392   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:59.358410   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.358419   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.358424   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.361127   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.855616   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:59.855641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.855651   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.855658   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.859977   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.861278   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:59.861294   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.861302   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.861306   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.865613   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.866536   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:59.866554   33166 pod_ready.go:81] duration metric: took 4.012114731s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.866563   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.866613   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:42:59.866621   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.866628   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.866632   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.871593   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.872293   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:59.872307   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.872314   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.872318   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.875022   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.875576   33166 pod_ready.go:92] pod "kube-proxy-dmg2w" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:59.875593   33166 pod_ready.go:81] duration metric: took 9.024494ms for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.875601   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.050979   33166 request.go:629] Waited for 175.301531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:43:00.051035   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:43:00.051040   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.051046   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.051051   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.054574   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.251779   33166 request.go:629] Waited for 196.439477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:00.251828   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:00.251834   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.251841   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.251845   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.255454   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.256066   33166 pod_ready.go:92] pod "kube-proxy-wxq4h" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:00.256086   33166 pod_ready.go:81] duration metric: took 380.479214ms for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.256094   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.451226   33166 request.go:629] Waited for 195.068355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:43:00.451301   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:43:00.451306   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.451313   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.451321   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.454409   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.651355   33166 request.go:629] Waited for 196.379921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:43:00.651429   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:43:00.651434   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.651441   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.651446   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.654722   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.655487   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:00.655503   33166 pod_ready.go:81] duration metric: took 399.403426ms for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.655512   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.851616   33166 request.go:629] Waited for 196.041128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:43:00.851664   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:43:00.851669   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.851677   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.851682   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.854518   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:01.051539   33166 request.go:629] Waited for 196.3823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:01.051602   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:01.051609   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.051620   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.051626   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.054933   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:01.055427   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:01.055447   33166 pod_ready.go:81] duration metric: took 399.927817ms for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:01.055459   33166 pod_ready.go:38] duration metric: took 9.19986509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
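
The readiness phase logged above is a simple polling loop: minikube repeatedly GETs each system pod (and its node) and checks the pod's Ready condition until it reports True. For reference, a minimal client-go sketch of that kind of loop follows; the function name and parameters are illustrative, not minikube's actual pod_ready implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the API server until the named pod reports the
	// Ready condition, or the timeout elapses. Illustrative sketch only.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, interval, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reported Ready, as at 18:42:59.866536 above
					}
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
			}
			time.Sleep(interval) // the log shows roughly 500ms between attempts
		}
	}
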
	I0531 18:43:01.055477   33166 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:43:01.055537   33166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:43:01.083139   33166 api_server.go:72] duration metric: took 17.009610466s to wait for apiserver process to appear ...
	I0531 18:43:01.083164   33166 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:43:01.083194   33166 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0531 18:43:01.091980   33166 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0531 18:43:01.092058   33166 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0531 18:43:01.092070   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.092081   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.092090   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.093732   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:43:01.093919   33166 api_server.go:141] control plane version: v1.30.1
	I0531 18:43:01.093939   33166 api_server.go:131] duration metric: took 10.766155ms to wait for apiserver health ...
	I0531 18:43:01.093950   33166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:43:01.251335   33166 request.go:629] Waited for 157.32352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.251411   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.251416   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.251424   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.251430   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.256259   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:01.260301   33166 system_pods.go:59] 17 kube-system pods found
	I0531 18:43:01.260325   33166 system_pods.go:61] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:43:01.260329   33166 system_pods.go:61] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:43:01.260333   33166 system_pods.go:61] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:43:01.260337   33166 system_pods.go:61] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:43:01.260340   33166 system_pods.go:61] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:43:01.260344   33166 system_pods.go:61] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:43:01.260349   33166 system_pods.go:61] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:43:01.260352   33166 system_pods.go:61] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:43:01.260356   33166 system_pods.go:61] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:43:01.260360   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:43:01.260363   33166 system_pods.go:61] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:43:01.260366   33166 system_pods.go:61] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:43:01.260369   33166 system_pods.go:61] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:43:01.260371   33166 system_pods.go:61] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:43:01.260374   33166 system_pods.go:61] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:43:01.260377   33166 system_pods.go:61] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:43:01.260380   33166 system_pods.go:61] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:43:01.260386   33166 system_pods.go:74] duration metric: took 166.430841ms to wait for pod list to return data ...
	I0531 18:43:01.260396   33166 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:43:01.450731   33166 request.go:629] Waited for 190.27249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:43:01.450817   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:43:01.450829   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.450839   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.450846   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.453536   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:01.453728   33166 default_sa.go:45] found service account: "default"
	I0531 18:43:01.453746   33166 default_sa.go:55] duration metric: took 193.343475ms for default service account to be created ...
	I0531 18:43:01.453755   33166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:43:01.651175   33166 request.go:629] Waited for 197.353935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.651245   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.651250   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.651257   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.651261   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.658681   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:43:01.662823   33166 system_pods.go:86] 17 kube-system pods found
	I0531 18:43:01.662846   33166 system_pods.go:89] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:43:01.662851   33166 system_pods.go:89] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:43:01.662855   33166 system_pods.go:89] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:43:01.662859   33166 system_pods.go:89] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:43:01.662864   33166 system_pods.go:89] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:43:01.662868   33166 system_pods.go:89] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:43:01.662872   33166 system_pods.go:89] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:43:01.662876   33166 system_pods.go:89] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:43:01.662881   33166 system_pods.go:89] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:43:01.662887   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:43:01.662890   33166 system_pods.go:89] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:43:01.662894   33166 system_pods.go:89] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:43:01.662901   33166 system_pods.go:89] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:43:01.662904   33166 system_pods.go:89] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:43:01.662908   33166 system_pods.go:89] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:43:01.662912   33166 system_pods.go:89] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:43:01.662915   33166 system_pods.go:89] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:43:01.662921   33166 system_pods.go:126] duration metric: took 209.162213ms to wait for k8s-apps to be running ...
	I0531 18:43:01.662930   33166 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:43:01.662970   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:43:01.678467   33166 system_svc.go:56] duration metric: took 15.528151ms WaitForService to wait for kubelet
	I0531 18:43:01.678494   33166 kubeadm.go:576] duration metric: took 17.604968594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:43:01.678516   33166 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:43:01.850855   33166 request.go:629] Waited for 172.276315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0531 18:43:01.850956   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0531 18:43:01.850972   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.850982   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.850990   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.854853   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:01.855615   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:43:01.855641   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:43:01.855655   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:43:01.855660   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:43:01.855666   33166 node_conditions.go:105] duration metric: took 177.144669ms to run NodePressure ...
	I0531 18:43:01.855678   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:43:01.855702   33166 start.go:254] writing updated cluster config ...
	I0531 18:43:01.857736   33166 out.go:177] 
	I0531 18:43:01.859220   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:01.859366   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:01.860838   33166 out.go:177] * Starting "ha-080809-m03" control-plane node in "ha-080809" cluster
	I0531 18:43:01.862061   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:43:01.862080   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:43:01.862156   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:43:01.862166   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:43:01.862240   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:01.862393   33166 start.go:360] acquireMachinesLock for ha-080809-m03: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:43:01.862439   33166 start.go:364] duration metric: took 23.982µs to acquireMachinesLock for "ha-080809-m03"
	I0531 18:43:01.862463   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:01.862552   33166 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0531 18:43:01.863925   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:43:01.863991   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:01.864021   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:01.878423   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0531 18:43:01.878800   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:01.879264   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:01.879287   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:01.879564   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:01.879751   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:01.879886   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:01.880025   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:43:01.880055   33166 client.go:168] LocalClient.Create starting
	I0531 18:43:01.880086   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:43:01.880115   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:43:01.880130   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:43:01.880176   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:43:01.880193   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:43:01.880204   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:43:01.880220   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:43:01.880227   33166 main.go:141] libmachine: (ha-080809-m03) Calling .PreCreateCheck
	I0531 18:43:01.880374   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:01.880666   33166 main.go:141] libmachine: Creating machine...
	I0531 18:43:01.880678   33166 main.go:141] libmachine: (ha-080809-m03) Calling .Create
	I0531 18:43:01.880772   33166 main.go:141] libmachine: (ha-080809-m03) Creating KVM machine...
	I0531 18:43:01.881991   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found existing default KVM network
	I0531 18:43:01.882124   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found existing private KVM network mk-ha-080809
	I0531 18:43:01.882234   33166 main.go:141] libmachine: (ha-080809-m03) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 ...
	I0531 18:43:01.882261   33166 main.go:141] libmachine: (ha-080809-m03) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:43:01.882329   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:01.882228   33952 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:43:01.882387   33166 main.go:141] libmachine: (ha-080809-m03) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:43:02.090721   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.090579   33952 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa...
	I0531 18:43:02.230526   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.230407   33952 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/ha-080809-m03.rawdisk...
	I0531 18:43:02.230557   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Writing magic tar header
	I0531 18:43:02.230571   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Writing SSH key tar header
	I0531 18:43:02.230587   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.230512   33952 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 ...
	I0531 18:43:02.230604   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03
	I0531 18:43:02.230643   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 (perms=drwx------)
	I0531 18:43:02.230655   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:43:02.230664   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:43:02.230672   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:43:02.230687   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:43:02.230700   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:43:02.230711   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:43:02.230724   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:43:02.230732   33166 main.go:141] libmachine: (ha-080809-m03) Creating domain...
	I0531 18:43:02.230740   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:43:02.230748   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:43:02.230763   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:43:02.230776   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home
	I0531 18:43:02.230791   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Skipping /home - not owner
	I0531 18:43:02.231711   33166 main.go:141] libmachine: (ha-080809-m03) define libvirt domain using xml: 
	I0531 18:43:02.231730   33166 main.go:141] libmachine: (ha-080809-m03) <domain type='kvm'>
	I0531 18:43:02.231740   33166 main.go:141] libmachine: (ha-080809-m03)   <name>ha-080809-m03</name>
	I0531 18:43:02.231749   33166 main.go:141] libmachine: (ha-080809-m03)   <memory unit='MiB'>2200</memory>
	I0531 18:43:02.231757   33166 main.go:141] libmachine: (ha-080809-m03)   <vcpu>2</vcpu>
	I0531 18:43:02.231771   33166 main.go:141] libmachine: (ha-080809-m03)   <features>
	I0531 18:43:02.231783   33166 main.go:141] libmachine: (ha-080809-m03)     <acpi/>
	I0531 18:43:02.231797   33166 main.go:141] libmachine: (ha-080809-m03)     <apic/>
	I0531 18:43:02.231809   33166 main.go:141] libmachine: (ha-080809-m03)     <pae/>
	I0531 18:43:02.231818   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.231826   33166 main.go:141] libmachine: (ha-080809-m03)   </features>
	I0531 18:43:02.231837   33166 main.go:141] libmachine: (ha-080809-m03)   <cpu mode='host-passthrough'>
	I0531 18:43:02.231886   33166 main.go:141] libmachine: (ha-080809-m03)   
	I0531 18:43:02.231909   33166 main.go:141] libmachine: (ha-080809-m03)   </cpu>
	I0531 18:43:02.231927   33166 main.go:141] libmachine: (ha-080809-m03)   <os>
	I0531 18:43:02.231938   33166 main.go:141] libmachine: (ha-080809-m03)     <type>hvm</type>
	I0531 18:43:02.231952   33166 main.go:141] libmachine: (ha-080809-m03)     <boot dev='cdrom'/>
	I0531 18:43:02.231961   33166 main.go:141] libmachine: (ha-080809-m03)     <boot dev='hd'/>
	I0531 18:43:02.231975   33166 main.go:141] libmachine: (ha-080809-m03)     <bootmenu enable='no'/>
	I0531 18:43:02.231985   33166 main.go:141] libmachine: (ha-080809-m03)   </os>
	I0531 18:43:02.231996   33166 main.go:141] libmachine: (ha-080809-m03)   <devices>
	I0531 18:43:02.232012   33166 main.go:141] libmachine: (ha-080809-m03)     <disk type='file' device='cdrom'>
	I0531 18:43:02.232049   33166 main.go:141] libmachine: (ha-080809-m03)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/boot2docker.iso'/>
	I0531 18:43:02.232076   33166 main.go:141] libmachine: (ha-080809-m03)       <target dev='hdc' bus='scsi'/>
	I0531 18:43:02.232090   33166 main.go:141] libmachine: (ha-080809-m03)       <readonly/>
	I0531 18:43:02.232100   33166 main.go:141] libmachine: (ha-080809-m03)     </disk>
	I0531 18:43:02.232113   33166 main.go:141] libmachine: (ha-080809-m03)     <disk type='file' device='disk'>
	I0531 18:43:02.232125   33166 main.go:141] libmachine: (ha-080809-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:43:02.232140   33166 main.go:141] libmachine: (ha-080809-m03)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/ha-080809-m03.rawdisk'/>
	I0531 18:43:02.232156   33166 main.go:141] libmachine: (ha-080809-m03)       <target dev='hda' bus='virtio'/>
	I0531 18:43:02.232167   33166 main.go:141] libmachine: (ha-080809-m03)     </disk>
	I0531 18:43:02.232178   33166 main.go:141] libmachine: (ha-080809-m03)     <interface type='network'>
	I0531 18:43:02.232192   33166 main.go:141] libmachine: (ha-080809-m03)       <source network='mk-ha-080809'/>
	I0531 18:43:02.232201   33166 main.go:141] libmachine: (ha-080809-m03)       <model type='virtio'/>
	I0531 18:43:02.232213   33166 main.go:141] libmachine: (ha-080809-m03)     </interface>
	I0531 18:43:02.232228   33166 main.go:141] libmachine: (ha-080809-m03)     <interface type='network'>
	I0531 18:43:02.232241   33166 main.go:141] libmachine: (ha-080809-m03)       <source network='default'/>
	I0531 18:43:02.232259   33166 main.go:141] libmachine: (ha-080809-m03)       <model type='virtio'/>
	I0531 18:43:02.232271   33166 main.go:141] libmachine: (ha-080809-m03)     </interface>
	I0531 18:43:02.232278   33166 main.go:141] libmachine: (ha-080809-m03)     <serial type='pty'>
	I0531 18:43:02.232290   33166 main.go:141] libmachine: (ha-080809-m03)       <target port='0'/>
	I0531 18:43:02.232299   33166 main.go:141] libmachine: (ha-080809-m03)     </serial>
	I0531 18:43:02.232307   33166 main.go:141] libmachine: (ha-080809-m03)     <console type='pty'>
	I0531 18:43:02.232317   33166 main.go:141] libmachine: (ha-080809-m03)       <target type='serial' port='0'/>
	I0531 18:43:02.232336   33166 main.go:141] libmachine: (ha-080809-m03)     </console>
	I0531 18:43:02.232353   33166 main.go:141] libmachine: (ha-080809-m03)     <rng model='virtio'>
	I0531 18:43:02.232369   33166 main.go:141] libmachine: (ha-080809-m03)       <backend model='random'>/dev/random</backend>
	I0531 18:43:02.232379   33166 main.go:141] libmachine: (ha-080809-m03)     </rng>
	I0531 18:43:02.232393   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.232403   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.232414   33166 main.go:141] libmachine: (ha-080809-m03)   </devices>
	I0531 18:43:02.232425   33166 main.go:141] libmachine: (ha-080809-m03) </domain>
	I0531 18:43:02.232435   33166 main.go:141] libmachine: (ha-080809-m03) 
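
The block above is the libvirt domain XML the kvm2 driver assembles for the new node before it is defined and booted (the "Creating domain..." step that follows). A minimal sketch of that define-then-create sequence, assuming the libvirt Go bindings (libvirt.org/go/libvirt, which require cgo and a local libvirtd); the helper name is hypothetical, not the driver's actual code.

	package main

	import (
		libvirt "libvirt.org/go/libvirt"
	)

	// defineAndStart persists a domain definition like the XML logged above
	// and boots it. Illustrative sketch only.
	func defineAndStart(domainXML string) error {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
		if err != nil {
			return err
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(domainXML) // "define libvirt domain using xml"
		if err != nil {
			return err
		}
		defer dom.Free()

		return dom.Create() // start the VM; the driver then waits for a DHCP lease
	}
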
	I0531 18:43:02.238951   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:a0:f1:97 in network default
	I0531 18:43:02.239544   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring networks are active...
	I0531 18:43:02.239563   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:02.240268   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring network default is active
	I0531 18:43:02.240630   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring network mk-ha-080809 is active
	I0531 18:43:02.240936   33166 main.go:141] libmachine: (ha-080809-m03) Getting domain xml...
	I0531 18:43:02.241629   33166 main.go:141] libmachine: (ha-080809-m03) Creating domain...
	I0531 18:43:03.460560   33166 main.go:141] libmachine: (ha-080809-m03) Waiting to get IP...
	I0531 18:43:03.461349   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.461672   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.461738   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.461679   33952 retry.go:31] will retry after 217.587912ms: waiting for machine to come up
	I0531 18:43:03.681278   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.681807   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.681846   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.681764   33952 retry.go:31] will retry after 249.833153ms: waiting for machine to come up
	I0531 18:43:03.933158   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.933630   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.933651   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.933583   33952 retry.go:31] will retry after 474.287257ms: waiting for machine to come up
	I0531 18:43:04.409317   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:04.409819   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:04.409850   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:04.409767   33952 retry.go:31] will retry after 443.148246ms: waiting for machine to come up
	I0531 18:43:04.854034   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:04.854523   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:04.854557   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:04.854474   33952 retry.go:31] will retry after 633.441738ms: waiting for machine to come up
	I0531 18:43:05.489926   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:05.490339   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:05.490364   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:05.490296   33952 retry.go:31] will retry after 851.493119ms: waiting for machine to come up
	I0531 18:43:06.343155   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:06.343539   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:06.343562   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:06.343495   33952 retry.go:31] will retry after 1.128436452s: waiting for machine to come up
	I0531 18:43:07.473066   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:07.473666   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:07.473695   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:07.473619   33952 retry.go:31] will retry after 1.149170287s: waiting for machine to come up
	I0531 18:43:08.624818   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:08.625318   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:08.625345   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:08.625276   33952 retry.go:31] will retry after 1.363964159s: waiting for machine to come up
	I0531 18:43:09.991757   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:09.992190   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:09.992236   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:09.992154   33952 retry.go:31] will retry after 1.823432389s: waiting for machine to come up
	I0531 18:43:11.817974   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:11.818405   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:11.818460   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:11.818380   33952 retry.go:31] will retry after 2.649232561s: waiting for machine to come up
	I0531 18:43:14.469976   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:14.470413   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:14.470440   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:14.470365   33952 retry.go:31] will retry after 2.29103927s: waiting for machine to come up
	I0531 18:43:16.764888   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:16.765395   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:16.765426   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:16.765340   33952 retry.go:31] will retry after 2.933558944s: waiting for machine to come up
	I0531 18:43:19.701947   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:19.702299   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:19.702331   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:19.702262   33952 retry.go:31] will retry after 3.848286184s: waiting for machine to come up
	I0531 18:43:23.551607   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.552073   33166 main.go:141] libmachine: (ha-080809-m03) Found IP for machine: 192.168.39.132
	I0531 18:43:23.552098   33166 main.go:141] libmachine: (ha-080809-m03) Reserving static IP address...
	I0531 18:43:23.552113   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has current primary IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.552455   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find host DHCP lease matching {name: "ha-080809-m03", mac: "52:54:00:9d:3b:ba", ip: "192.168.39.132"} in network mk-ha-080809
	I0531 18:43:23.621780   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Getting to WaitForSSH function...
	I0531 18:43:23.621811   33166 main.go:141] libmachine: (ha-080809-m03) Reserved static IP address: 192.168.39.132
	I0531 18:43:23.621838   33166 main.go:141] libmachine: (ha-080809-m03) Waiting for SSH to be available...
	I0531 18:43:23.624415   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.624835   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809
	I0531 18:43:23.624863   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:9d:3b:ba
	I0531 18:43:23.625089   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH client type: external
	I0531 18:43:23.625118   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa (-rw-------)
	I0531 18:43:23.625145   33166 main.go:141] libmachine: (ha-080809-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:43:23.625159   33166 main.go:141] libmachine: (ha-080809-m03) DBG | About to run SSH command:
	I0531 18:43:23.625172   33166 main.go:141] libmachine: (ha-080809-m03) DBG | exit 0
	I0531 18:43:23.628673   33166 main.go:141] libmachine: (ha-080809-m03) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:43:23.628689   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:43:23.628696   33166 main.go:141] libmachine: (ha-080809-m03) DBG | command : exit 0
	I0531 18:43:23.628700   33166 main.go:141] libmachine: (ha-080809-m03) DBG | err     : exit status 255
	I0531 18:43:23.628708   33166 main.go:141] libmachine: (ha-080809-m03) DBG | output  : 
	I0531 18:43:26.629183   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Getting to WaitForSSH function...
	I0531 18:43:26.631197   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.631568   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.631589   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.631731   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH client type: external
	I0531 18:43:26.631753   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa (-rw-------)
	I0531 18:43:26.631782   33166 main.go:141] libmachine: (ha-080809-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:43:26.631796   33166 main.go:141] libmachine: (ha-080809-m03) DBG | About to run SSH command:
	I0531 18:43:26.631809   33166 main.go:141] libmachine: (ha-080809-m03) DBG | exit 0
	I0531 18:43:26.760999   33166 main.go:141] libmachine: (ha-080809-m03) DBG | SSH cmd err, output: <nil>: 
	I0531 18:43:26.761310   33166 main.go:141] libmachine: (ha-080809-m03) KVM machine creation complete!
	I0531 18:43:26.761644   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:26.762182   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:26.762499   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:26.762676   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:43:26.762688   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:43:26.763935   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:43:26.763950   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:43:26.763956   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:43:26.763965   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:26.766294   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.766690   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.766720   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.766886   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:26.767038   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.767184   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.767314   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:26.767499   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:26.767684   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:26.767696   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:43:26.880273   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:43:26.880302   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:43:26.880312   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:26.882898   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.883199   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.883223   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.883439   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:26.883634   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.883740   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.883880   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:26.884072   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:26.884259   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:26.884273   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:43:27.001677   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:43:27.001734   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:43:27.001740   33166 main.go:141] libmachine: Provisioning with buildroot...
	I0531 18:43:27.001748   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.001960   33166 buildroot.go:166] provisioning hostname "ha-080809-m03"
	I0531 18:43:27.001993   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.002200   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.004407   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.004737   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.004767   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.004834   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.005035   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.005200   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.005347   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.005499   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.005663   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.005677   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809-m03 && echo "ha-080809-m03" | sudo tee /etc/hostname
	I0531 18:43:27.136866   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809-m03
	
	I0531 18:43:27.136888   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.139661   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.140075   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.140104   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.140228   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.140384   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.140541   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.140647   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.140826   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.141041   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.141059   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:43:27.270861   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
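The script above keeps the /etc/hosts update idempotent: the node name is only written when no existing line already ends in the hostname, and an existing 127.0.1.1 entry is rewritten in place rather than duplicated. A minimal Go sketch of how such a script could be assembled for an arbitrary hostname follows; hostsUpdateScript is a hypothetical helper used for illustration, not minikube's actual provisioning code.

	// hostsUpdateScript builds the idempotent /etc/hosts edit shown in the log:
	// rewrite the 127.0.1.1 line if one exists, otherwise append one, and do
	// nothing at all when the hostname is already present.
	package main

	import "fmt"

	func hostsUpdateScript(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsUpdateScript("ha-080809-m03"))
	}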
	I0531 18:43:27.270890   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:43:27.270912   33166 buildroot.go:174] setting up certificates
	I0531 18:43:27.270925   33166 provision.go:84] configureAuth start
	I0531 18:43:27.270936   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.271182   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:27.273872   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.274225   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.274255   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.274398   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.276525   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.276845   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.276865   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.277027   33166 provision.go:143] copyHostCerts
	I0531 18:43:27.277054   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:43:27.277084   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:43:27.277091   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:43:27.277152   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:43:27.277219   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:43:27.277238   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:43:27.277244   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:43:27.277267   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:43:27.277321   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:43:27.277338   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:43:27.277344   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:43:27.277365   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:43:27.277430   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809-m03 san=[127.0.0.1 192.168.39.132 ha-080809-m03 localhost minikube]
	I0531 18:43:27.803552   33166 provision.go:177] copyRemoteCerts
	I0531 18:43:27.803612   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:43:27.803645   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.806606   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.807056   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.807095   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.807234   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.807456   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.807631   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.807759   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:27.899155   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:43:27.899249   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:43:27.923813   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:43:27.923885   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0531 18:43:27.949341   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:43:27.949412   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:43:27.973160   33166 provision.go:87] duration metric: took 702.223721ms to configureAuth
	I0531 18:43:27.973189   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:43:27.973463   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:27.973543   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.975968   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.976363   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.976385   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.976575   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.976766   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.976933   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.977106   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.977267   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.977415   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.977430   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:43:28.270929   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:43:28.270953   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:43:28.270968   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetURL
	I0531 18:43:28.272112   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using libvirt version 6000000
	I0531 18:43:28.274617   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.274982   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.275011   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.275176   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:43:28.275194   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:43:28.275200   33166 client.go:171] duration metric: took 26.395137472s to LocalClient.Create
	I0531 18:43:28.275226   33166 start.go:167] duration metric: took 26.395199871s to libmachine.API.Create "ha-080809"
	I0531 18:43:28.275239   33166 start.go:293] postStartSetup for "ha-080809-m03" (driver="kvm2")
	I0531 18:43:28.275258   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:43:28.275281   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.275551   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:43:28.275579   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.277959   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.278331   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.278362   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.278483   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.278653   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.278813   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.278957   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.367657   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:43:28.371933   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:43:28.371959   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:43:28.372052   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:43:28.372137   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:43:28.372147   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:43:28.372224   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:43:28.381683   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:43:28.405559   33166 start.go:296] duration metric: took 130.30164ms for postStartSetup
	I0531 18:43:28.405609   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:28.406191   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:28.408785   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.409200   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.409226   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.409486   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:28.409666   33166 start.go:128] duration metric: took 26.54710274s to createHost
	I0531 18:43:28.409697   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.411904   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.412287   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.412307   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.412441   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.412631   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.412785   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.412893   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.413051   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:28.413211   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:28.413224   33166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:43:28.529683   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181008.508532399
	
	I0531 18:43:28.529704   33166 fix.go:216] guest clock: 1717181008.508532399
	I0531 18:43:28.529712   33166 fix.go:229] Guest: 2024-05-31 18:43:28.508532399 +0000 UTC Remote: 2024-05-31 18:43:28.409677241 +0000 UTC m=+159.121951096 (delta=98.855158ms)
	I0531 18:43:28.529726   33166 fix.go:200] guest clock delta is within tolerance: 98.855158ms
	I0531 18:43:28.529731   33166 start.go:83] releasing machines lock for "ha-080809-m03", held for 26.667282362s
	I0531 18:43:28.529748   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.529953   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:28.532379   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.532669   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.532697   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.535221   33166 out.go:177] * Found network options:
	I0531 18:43:28.536708   33166 out.go:177]   - NO_PROXY=192.168.39.2,192.168.39.51
	W0531 18:43:28.537888   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 18:43:28.537906   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:43:28.537918   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538415   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538566   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538660   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:43:28.538697   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	W0531 18:43:28.538702   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 18:43:28.538716   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:43:28.538763   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:43:28.538779   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.541399   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541663   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541752   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.541774   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541926   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.542015   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.542051   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.542082   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.542179   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.542247   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.542333   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.542407   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.542441   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.542569   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.788519   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:43:28.795611   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:43:28.795679   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:43:28.812413   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:43:28.812433   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:43:28.812490   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:43:28.831628   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:43:28.847597   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:43:28.847641   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:43:28.862540   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:43:28.877654   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:43:28.989967   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:43:29.154561   33166 docker.go:233] disabling docker service ...
	I0531 18:43:29.154645   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:43:29.169912   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:43:29.183740   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:43:29.306538   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:43:29.427978   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:43:29.451329   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:43:29.470342   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:43:29.470408   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.481119   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:43:29.481183   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.491741   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.501860   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.512075   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:43:29.523302   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.534091   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.552066   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
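The sed commands above adjust /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs with conmon following the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls. The Go sketch below applies the first two of those edits to a local copy of the file using regexp rather than sed; configureCrio and the file path are assumptions made for the illustration, not minikube's implementation.

	// configureCrio rewrites pause_image and cgroup_manager in a CRI-O drop-in
	// file, mirroring the first two sed edits shown in the log above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func configureCrio(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		conf := string(data)
		// Point CRI-O at the desired pause image.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
		// Switch the cgroup manager (cgroupfs in this run).
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, fmt.Sprintf("cgroup_manager = %q", cgroupManager))
		return os.WriteFile(path, []byte(conf), 0o644)
	}

	func main() {
		if err := configureCrio("02-crio.conf", "registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}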
	I0531 18:43:29.563282   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:43:29.573627   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:43:29.573683   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:43:29.587830   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:43:29.597625   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:29.725913   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:43:29.878013   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:43:29.878089   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:43:29.883390   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:43:29.883444   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:43:29.887451   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:43:29.924056   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:43:29.924159   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:43:29.952015   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:43:29.988257   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:43:29.989817   33166 out.go:177]   - env NO_PROXY=192.168.39.2
	I0531 18:43:29.991145   33166 out.go:177]   - env NO_PROXY=192.168.39.2,192.168.39.51
	I0531 18:43:29.992302   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:29.994728   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:29.995094   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:29.995117   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:29.995331   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:43:29.999628   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:43:30.012246   33166 mustload.go:65] Loading cluster: ha-080809
	I0531 18:43:30.012508   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:30.012762   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:30.012798   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:30.029304   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0531 18:43:30.029680   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:30.030171   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:30.030195   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:30.030496   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:30.030662   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:43:30.032121   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:43:30.032427   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:30.032461   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:30.046245   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0531 18:43:30.046582   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:30.047016   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:30.047037   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:30.047364   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:30.047540   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:43:30.047695   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.132
	I0531 18:43:30.047705   33166 certs.go:194] generating shared ca certs ...
	I0531 18:43:30.047717   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.047815   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:43:30.047850   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:43:30.047883   33166 certs.go:256] generating profile certs ...
	I0531 18:43:30.047957   33166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:43:30.047980   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834
	I0531 18:43:30.047993   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:43:30.275589   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 ...
	I0531 18:43:30.275618   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834: {Name:mk51db84405b0da738612b01e79c91f0dc2fa702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.275827   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834 ...
	I0531 18:43:30.275844   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834: {Name:mkfa5bc2228ebaecff4f3c4178898d0f2265a665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.275930   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:43:30.276059   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:43:30.276175   33166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:43:30.276189   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:43:30.276200   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:43:30.276211   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:43:30.276223   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:43:30.276237   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:43:30.276251   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:43:30.276263   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:43:30.276275   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:43:30.276322   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:43:30.276347   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:43:30.276357   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:43:30.276378   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:43:30.276399   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:43:30.276421   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:43:30.276460   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:43:30.276484   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.276498   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.276510   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.276539   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:43:30.279570   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:30.279975   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:43:30.280005   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:30.280164   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:43:30.280393   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:43:30.280590   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:43:30.280732   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:43:30.357227   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0531 18:43:30.362371   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0531 18:43:30.374798   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0531 18:43:30.380056   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0531 18:43:30.394771   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0531 18:43:30.399378   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0531 18:43:30.411373   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0531 18:43:30.417169   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0531 18:43:30.428731   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0531 18:43:30.433027   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0531 18:43:30.443960   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0531 18:43:30.448284   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0531 18:43:30.458960   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:43:30.485217   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:43:30.510052   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:43:30.536115   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:43:30.559594   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0531 18:43:30.586162   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:43:30.612565   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:43:30.638529   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:43:30.662797   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:43:30.686329   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:43:30.710793   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:43:30.734601   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0531 18:43:30.750657   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0531 18:43:30.767001   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0531 18:43:30.783327   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0531 18:43:30.800171   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0531 18:43:30.817581   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0531 18:43:30.834753   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0531 18:43:30.852245   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:43:30.858363   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:43:30.870534   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.876311   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.876368   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.883849   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:43:30.894355   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:43:30.904607   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.908893   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.908942   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.914948   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:43:30.925847   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:43:30.937046   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.941694   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.941756   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.947363   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:43:30.957691   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:43:30.961758   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:43:30.961818   33166 kubeadm.go:928] updating node {m03 192.168.39.132 8443 v1.30.1 crio true true} ...
	I0531 18:43:30.961905   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:43:30.961932   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:43:30.961971   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:43:30.978903   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:43:30.978975   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
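The manifest above is the kube-vip static pod written to each control-plane node: the cluster VIP (192.168.39.254), API port, leader election and control-plane load balancing are all passed in through environment variables. A rough sketch of rendering such a manifest from parameters is shown below, assuming a simplified template and a hypothetical kubeVipManifest helper rather than minikube's actual kube-vip config generator.

	// kubeVipManifest renders a cut-down kube-vip static pod manifest from a
	// template and writes it to the given path (e.g. the static pod directory).
	package main

	import (
		"os"
		"text/template"
	)

	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .VIP }}
	    image: {{ .Image }}
	    name: kube-vip
	  hostNetwork: true
	`

	func kubeVipManifest(path string, data any) error {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		f, err := os.Create(path)
		if err != nil {
			return err
		}
		defer f.Close()
		return t.Execute(f, data)
	}

	func main() {
		_ = kubeVipManifest("kube-vip.yaml", map[string]string{
			"Port": "8443", "Interface": "eth0", "VIP": "192.168.39.254",
			"Image": "ghcr.io/kube-vip/kube-vip:v0.8.0",
		})
	}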
	I0531 18:43:30.979047   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:43:30.988579   33166 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0531 18:43:30.988634   33166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0531 18:43:30.998029   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0531 18:43:30.998044   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0531 18:43:30.998040   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0531 18:43:30.998065   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:43:30.998068   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:43:30.998077   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:43:30.998137   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:43:30.998161   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:43:31.014794   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:43:31.014830   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0531 18:43:31.014859   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0531 18:43:31.014869   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:43:31.014900   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0531 18:43:31.014925   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0531 18:43:31.041854   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0531 18:43:31.041899   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
	I0531 18:43:31.894463   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0531 18:43:31.904252   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0531 18:43:31.921223   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:43:31.937881   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:43:31.957239   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:43:31.961508   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:43:31.974977   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:32.114508   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:43:32.132751   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:43:32.133262   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:32.133314   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:32.150687   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0531 18:43:32.151082   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:32.151663   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:32.151692   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:32.152035   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:32.152241   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:43:32.152381   33166 start.go:316] joinCluster: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:43:32.152522   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 18:43:32.152541   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:43:32.155748   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:32.156115   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:43:32.156143   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:32.156274   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:43:32.156438   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:43:32.156598   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:43:32.156735   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:43:32.325915   33166 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:32.325974   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yfdj6g.iwwai9c61lgm6d3m --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m03 --control-plane --apiserver-advertise-address=192.168.39.132 --apiserver-bind-port=8443"
	I0531 18:43:54.924265   33166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yfdj6g.iwwai9c61lgm6d3m --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m03 --control-plane --apiserver-advertise-address=192.168.39.132 --apiserver-bind-port=8443": (22.598262737s)
	I0531 18:43:54.924303   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 18:43:55.508360   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809-m03 minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=false
	I0531 18:43:55.639237   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-080809-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0531 18:43:55.752944   33166 start.go:318] duration metric: took 23.600557093s to joinCluster
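
    [editorial sketch] The joinCluster step logged above follows a fixed sequence: ask kubeadm on the primary control plane for a join command, append the control-plane flags, run it on the new node, then enable kubelet and label/taint the node. A minimal, hedged Go sketch of the first part is shown below; it assumes kubeadm is on PATH and runnable via sudo, copies only the node name and advertise address from the log, and is illustrative rather than minikube's actual implementation.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // buildJoinCommand asks kubeadm on the primary control plane for a join
    // command and appends the extra control-plane flags seen in the log above.
    // All values here are a sketch; adjust for your own cluster.
    func buildJoinCommand() (string, error) {
    	out, err := exec.Command("sudo", "kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		return "", fmt.Errorf("token create: %w", err)
    	}
    	join := strings.TrimSpace(string(out))
    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket", "unix:///var/run/crio/crio.sock",
    		"--node-name=ha-080809-m03",
    		"--control-plane",
    		"--apiserver-advertise-address=192.168.39.132",
    		"--apiserver-bind-port=8443",
    	}
    	return join + " " + strings.Join(extra, " "), nil
    }

    func main() {
    	cmd, err := buildJoinCommand()
    	if err != nil {
    		panic(err)
    	}
    	// Run the printed command on the joining node, then enable/start kubelet,
    	// as the subsequent log lines do.
    	fmt.Println(cmd)
    }
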
	I0531 18:43:55.753037   33166 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:55.754695   33166 out.go:177] * Verifying Kubernetes components...
	I0531 18:43:55.753287   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:55.756466   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:55.984373   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:43:56.006734   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:43:56.007083   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0531 18:43:56.007199   33166 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0531 18:43:56.007484   33166 node_ready.go:35] waiting up to 6m0s for node "ha-080809-m03" to be "Ready" ...
	I0531 18:43:56.007582   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:56.007593   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:56.007604   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:56.007625   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:56.010582   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:56.508562   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:56.508582   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:56.508590   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:56.508594   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:56.522674   33166 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0531 18:43:57.008457   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:57.008482   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:57.008490   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:57.008495   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:57.012216   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:57.507763   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:57.507785   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:57.507793   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:57.507797   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:57.512220   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:58.007967   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:58.007994   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:58.008005   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:58.008011   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:58.011388   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:58.011954   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:43:58.508697   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:58.508716   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:58.508723   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:58.508728   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:58.512461   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:59.008338   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:59.008372   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:59.008384   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:59.008391   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:59.012804   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:59.508470   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:59.508492   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:59.508504   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:59.508510   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:59.511548   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:00.008376   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:00.008399   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:00.008408   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:00.008414   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:00.012135   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:00.012892   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:44:00.507925   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:00.508010   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:00.508026   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:00.508033   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:00.512373   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:01.007831   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:01.007862   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:01.007871   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:01.007876   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:01.011604   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:01.507899   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:01.507922   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:01.507932   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:01.507937   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:01.511924   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:02.008201   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:02.008224   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:02.008232   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:02.008236   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:02.011827   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:02.508330   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:02.508355   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:02.508365   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:02.508370   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:02.512758   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:02.513438   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:44:03.008715   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:03.008746   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:03.008757   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:03.008761   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:03.014361   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:44:03.507848   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:03.507872   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:03.507883   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:03.507891   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:03.510882   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.007877   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.007903   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.007914   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.007919   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.010907   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.011388   33166 node_ready.go:49] node "ha-080809-m03" has status "Ready":"True"
	I0531 18:44:04.011405   33166 node_ready.go:38] duration metric: took 8.003900807s for node "ha-080809-m03" to be "Ready" ...
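
    [editorial sketch] The repeated GET /api/v1/nodes/ha-080809-m03 requests above are the standard "poll until the Ready condition is True" pattern. A minimal client-go sketch follows; the kubeconfig path, poll interval, and timeout are placeholders (not minikube's values), and the same pattern applies to the per-pod Ready checks in the log lines that follow.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the API server until the named node reports
    // Ready=True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // placeholder poll interval
    	}
    	return fmt.Errorf("node %s not Ready within %s", name, timeout)
    }

    func main() {
    	// Placeholder kubeconfig path; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(context.Background(), cs, "ha-080809-m03", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }
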
	I0531 18:44:04.011413   33166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:44:04.011466   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:04.011476   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.011483   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.011488   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.018104   33166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0531 18:44:04.024724   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.024807   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr2wg
	I0531 18:44:04.024818   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.024829   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.024836   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.027751   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.028558   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.028572   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.028579   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.028582   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.030777   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.031279   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.031295   33166 pod_ready.go:81] duration metric: took 6.552615ms for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.031302   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.031351   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbkpk
	I0531 18:44:04.031359   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.031366   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.031369   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.033333   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:44:04.033912   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.033926   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.033933   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.033936   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.035988   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.036468   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.036486   33166 pod_ready.go:81] duration metric: took 5.179103ms for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.036492   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.036532   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809
	I0531 18:44:04.036540   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.036547   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.036551   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.038564   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.039041   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.039055   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.039062   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.039065   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.041060   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:44:04.041626   33166 pod_ready.go:92] pod "etcd-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.041646   33166 pod_ready.go:81] duration metric: took 5.147706ms for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.041657   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.041716   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:44:04.041726   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.041736   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.041744   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.044464   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.044945   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:04.044981   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.044992   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.045000   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.047071   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.047512   33166 pod_ready.go:92] pod "etcd-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.047529   33166 pod_ready.go:81] duration metric: took 5.864641ms for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.047536   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.208888   33166 request.go:629] Waited for 161.284399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.208974   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.208984   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.208997   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.209001   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.212729   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:04.408701   33166 request.go:629] Waited for 195.146959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.408790   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.408803   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.408817   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.408827   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.411794   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.608981   33166 request.go:629] Waited for 60.228904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.609055   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.609068   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.609079   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.609084   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.612604   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:04.808795   33166 request.go:629] Waited for 195.4035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.808862   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.808870   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.808878   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.808886   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.811742   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:05.047857   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:05.047878   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.047884   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.047889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.055030   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:05.208258   33166 request.go:629] Waited for 152.332527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.208321   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.208327   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.208334   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.208338   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.211967   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:05.547870   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:05.547892   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.547900   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.547904   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.551384   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:05.608434   33166 request.go:629] Waited for 56.233241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.608498   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.608506   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.608517   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.608547   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.612256   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.048352   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:06.048376   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.048387   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.048395   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.051790   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.052643   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:06.052660   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.052670   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.052677   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.056064   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.056604   33166 pod_ready.go:92] pod "etcd-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.056623   33166 pod_ready.go:81] duration metric: took 2.009077542s for pod "etcd-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.056646   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.207951   33166 request.go:629] Waited for 151.247089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:44:06.208026   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:44:06.208034   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.208042   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.208049   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.211710   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.408063   33166 request.go:629] Waited for 195.289266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:06.408116   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:06.408122   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.408129   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.408133   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.411772   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.412556   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.412580   33166 pod_ready.go:81] duration metric: took 355.922336ms for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.412593   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.608593   33166 request.go:629] Waited for 195.920387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:44:06.608658   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:44:06.608664   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.608671   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.608674   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.612075   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.808197   33166 request.go:629] Waited for 195.395118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:06.808281   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:06.808288   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.808301   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.808306   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.811647   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.812382   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.812398   33166 pod_ready.go:81] duration metric: took 399.797761ms for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.812407   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:07.008917   33166 request.go:629] Waited for 196.45327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.009038   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.009049   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.009056   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.009061   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.012335   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.208347   33166 request.go:629] Waited for 195.262714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.208400   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.208407   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.208417   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.208424   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.211735   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.408789   33166 request.go:629] Waited for 96.249579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.408849   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.408856   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.408869   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.408876   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.413184   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:07.608021   33166 request.go:629] Waited for 193.205589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.608099   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.608110   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.608121   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.608125   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.611239   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.812841   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.812870   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.812881   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.812889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.816905   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.007966   33166 request.go:629] Waited for 190.306729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.008026   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.008033   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.008047   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.008072   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.011609   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.312868   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:08.312889   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.312896   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.312900   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.316364   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.408645   33166 request.go:629] Waited for 91.280051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.408716   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.408722   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.408733   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.408737   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.412254   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.813057   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:08.813080   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.813086   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.813090   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.816997   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.817867   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.817880   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.817890   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.817895   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.820539   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:08.821206   33166 pod_ready.go:102] pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace has status "Ready":"False"
	I0531 18:44:09.313455   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:09.313477   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.313484   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.313489   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.318268   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:09.319391   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:09.319413   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.319424   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.319430   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.323063   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:09.813103   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:09.813132   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.813139   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.813144   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.817142   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:09.817858   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:09.817873   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.817882   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.817889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.820621   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:10.313550   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:10.313575   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.313583   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.313585   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.317147   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:10.317961   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:10.317977   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.317984   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.317988   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.320632   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:10.812587   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:10.812611   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.812618   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.812623   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.816474   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:10.817285   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:10.817301   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.817308   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.817311   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.819792   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.312791   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:11.312814   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.312821   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.312825   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.316277   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.316899   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:11.316917   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.316928   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.316934   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.319740   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.320286   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.320304   33166 pod_ready.go:81] duration metric: took 4.507890838s for pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.320315   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.320370   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:44:11.320381   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.320391   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.320402   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.323073   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.323796   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:11.323811   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.323821   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.323828   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.325967   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.326488   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.326502   33166 pod_ready.go:81] duration metric: took 6.180855ms for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.326510   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.408760   33166 request.go:629] Waited for 82.194514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:44:11.408841   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:44:11.408852   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.408863   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.408870   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.412339   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.608509   33166 request.go:629] Waited for 195.392871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:11.608578   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:11.608583   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.608590   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.608593   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.612042   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.612779   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.612803   33166 pod_ready.go:81] duration metric: took 286.284988ms for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.612816   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.808147   33166 request.go:629] Waited for 195.275334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m03
	I0531 18:44:11.808219   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m03
	I0531 18:44:11.808243   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.808253   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.808262   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.811885   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.007959   33166 request.go:629] Waited for 195.359718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.008057   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.008068   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.008079   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.008087   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.011646   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.012283   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.012301   33166 pod_ready.go:81] duration metric: took 399.477853ms for pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.012311   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.208853   33166 request.go:629] Waited for 196.469258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:44:12.208976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:44:12.208985   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.208992   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.208998   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.211972   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:12.408385   33166 request.go:629] Waited for 195.420767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:12.408461   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:12.408469   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.408478   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.408484   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.415578   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:12.416365   33166 pod_ready.go:92] pod "kube-proxy-dmg2w" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.416391   33166 pod_ready.go:81] duration metric: took 404.072958ms for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.416405   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nzlz5" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.608390   33166 request.go:629] Waited for 191.914789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nzlz5
	I0531 18:44:12.608474   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nzlz5
	I0531 18:44:12.608482   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.608493   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.608502   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.612047   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.808208   33166 request.go:629] Waited for 195.34955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.808274   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.808279   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.808286   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.808290   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.811346   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.811776   33166 pod_ready.go:92] pod "kube-proxy-nzlz5" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.811796   33166 pod_ready.go:81] duration metric: took 395.38334ms for pod "kube-proxy-nzlz5" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.811807   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.008857   33166 request.go:629] Waited for 196.940044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:44:13.008924   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:44:13.008935   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.008946   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.008970   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.014184   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:44:13.208051   33166 request.go:629] Waited for 193.284555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:13.208136   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:13.208147   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.208159   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.208168   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.212111   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:13.212873   33166 pod_ready.go:92] pod "kube-proxy-wxq4h" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:13.212893   33166 pod_ready.go:81] duration metric: took 401.079094ms for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.212901   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.408169   33166 request.go:629] Waited for 195.199419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:44:13.408268   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:44:13.408283   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.408291   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.408300   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.412365   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:13.608864   33166 request.go:629] Waited for 195.801565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:13.608934   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:13.608944   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.608967   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.608999   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.612469   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:13.613071   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:13.613093   33166 pod_ready.go:81] duration metric: took 400.185239ms for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.613106   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.807931   33166 request.go:629] Waited for 194.751128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:44:13.808013   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:44:13.808021   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.808030   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.808038   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.811170   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.008050   33166 request.go:629] Waited for 196.214799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:14.008128   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:14.008144   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.008154   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.008162   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.011698   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.012494   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:14.012515   33166 pod_ready.go:81] duration metric: took 399.402939ms for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.012525   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.208543   33166 request.go:629] Waited for 195.940009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m03
	I0531 18:44:14.208597   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m03
	I0531 18:44:14.208606   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.208614   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.208618   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.211615   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:14.408286   33166 request.go:629] Waited for 195.991533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:14.408351   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:14.408358   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.408368   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.408373   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.411642   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.412320   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:14.412341   33166 pod_ready.go:81] duration metric: took 399.80943ms for pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.412354   33166 pod_ready.go:38] duration metric: took 10.400932137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
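The pod_ready phase above polls each system-critical pod (and the node it runs on) until the pod reports the Ready condition, backing off when client-side throttling kicks in. Below is a minimal client-go sketch of that kind of readiness check; it is illustrative only, and the kubeconfig path, poll interval, and pod name are assumptions rather than minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // coarse poll interval, chosen for illustration
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "kube-proxy-wxq4h", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}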
	I0531 18:44:14.412372   33166 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:44:14.412430   33166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:44:14.430933   33166 api_server.go:72] duration metric: took 18.677854715s to wait for apiserver process to appear ...
	I0531 18:44:14.430969   33166 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:44:14.431007   33166 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0531 18:44:14.437802   33166 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0531 18:44:14.437884   33166 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0531 18:44:14.437896   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.437908   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.437919   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.438850   33166 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0531 18:44:14.438917   33166 api_server.go:141] control plane version: v1.30.1
	I0531 18:44:14.438934   33166 api_server.go:131] duration metric: took 7.957476ms to wait for apiserver health ...
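After confirming the kube-apiserver process exists (the pgrep over SSH above), the harness probes the apiserver's /healthz and /version endpoints and expects a 200 with body "ok". A hedged sketch of that HTTP probe is below; the real client authenticates with the cluster's CA and client certificates, while this sketch skips TLS verification purely to keep the shape of the request visible.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: verification is disabled here; a real check would
	// load the cluster CA and client certs instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
}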
	I0531 18:44:14.438945   33166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:44:14.608375   33166 request.go:629] Waited for 169.342969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:14.608444   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:14.608452   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.608465   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.608473   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.616151   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:14.622539   33166 system_pods.go:59] 24 kube-system pods found
	I0531 18:44:14.622568   33166 system_pods.go:61] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:44:14.622575   33166 system_pods.go:61] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:44:14.622581   33166 system_pods.go:61] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:44:14.622586   33166 system_pods.go:61] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:44:14.622590   33166 system_pods.go:61] "etcd-ha-080809-m03" [f9812a68-fb91-42d9-ba6c-761c909c8b1d] Running
	I0531 18:44:14.622595   33166 system_pods.go:61] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:44:14.622599   33166 system_pods.go:61] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:44:14.622608   33166 system_pods.go:61] "kindnet-qzfbh" [30e56f89-1f34-4bc0-9728-29f158a091b4] Running
	I0531 18:44:14.622612   33166 system_pods.go:61] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:44:14.622616   33166 system_pods.go:61] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:44:14.622622   33166 system_pods.go:61] "kube-apiserver-ha-080809-m03" [a8246e62-434e-4572-8943-8ab28a126794] Running
	I0531 18:44:14.622627   33166 system_pods.go:61] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:44:14.622634   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:44:14.622641   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m03" [74632272-0a23-4df1-aaae-675ab9550e19] Running
	I0531 18:44:14.622649   33166 system_pods.go:61] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:44:14.622653   33166 system_pods.go:61] "kube-proxy-nzlz5" [d59a1a71-3dc2-4cf6-9e01-b61fa8f26053] Running
	I0531 18:44:14.622656   33166 system_pods.go:61] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:44:14.622659   33166 system_pods.go:61] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:44:14.622662   33166 system_pods.go:61] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:44:14.622665   33166 system_pods.go:61] "kube-scheduler-ha-080809-m03" [e9840dfe-2860-4c23-af6a-c45ebd36869e] Running
	I0531 18:44:14.622669   33166 system_pods.go:61] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:44:14.622672   33166 system_pods.go:61] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:44:14.622674   33166 system_pods.go:61] "kube-vip-ha-080809-m03" [b152f6a1-3708-4f12-82f5-385eb59b3572] Running
	I0531 18:44:14.622677   33166 system_pods.go:61] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:44:14.622683   33166 system_pods.go:74] duration metric: took 183.729019ms to wait for pod list to return data ...
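The system_pods step above is a single list of the kube-system namespace, recording each pod's phase. A minimal sketch of that listing with client-go follows; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Mirrors the log format above: name, UID, phase.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}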
	I0531 18:44:14.622692   33166 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:44:14.807953   33166 request.go:629] Waited for 185.190851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:44:14.808014   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:44:14.808019   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.808027   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.808035   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.811075   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.811211   33166 default_sa.go:45] found service account: "default"
	I0531 18:44:14.811232   33166 default_sa.go:55] duration metric: took 188.533375ms for default service account to be created ...
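The default_sa step only verifies that the "default" ServiceAccount exists in the default namespace. A short sketch of the equivalent existence check (using Get and a not-found test rather than the List the log shows; kubeconfig path assumed):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("default service account not created yet")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service account: %q\n", sa.Name)
}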
	I0531 18:44:14.811243   33166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:44:15.008550   33166 request.go:629] Waited for 197.237981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:15.008669   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:15.008681   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:15.008691   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:15.008702   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:15.015400   33166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0531 18:44:15.022687   33166 system_pods.go:86] 24 kube-system pods found
	I0531 18:44:15.022713   33166 system_pods.go:89] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:44:15.022719   33166 system_pods.go:89] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:44:15.022723   33166 system_pods.go:89] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:44:15.022728   33166 system_pods.go:89] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:44:15.022732   33166 system_pods.go:89] "etcd-ha-080809-m03" [f9812a68-fb91-42d9-ba6c-761c909c8b1d] Running
	I0531 18:44:15.022736   33166 system_pods.go:89] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:44:15.022739   33166 system_pods.go:89] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:44:15.022744   33166 system_pods.go:89] "kindnet-qzfbh" [30e56f89-1f34-4bc0-9728-29f158a091b4] Running
	I0531 18:44:15.022751   33166 system_pods.go:89] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:44:15.022757   33166 system_pods.go:89] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:44:15.022763   33166 system_pods.go:89] "kube-apiserver-ha-080809-m03" [a8246e62-434e-4572-8943-8ab28a126794] Running
	I0531 18:44:15.022770   33166 system_pods.go:89] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:44:15.022780   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:44:15.022790   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m03" [74632272-0a23-4df1-aaae-675ab9550e19] Running
	I0531 18:44:15.022797   33166 system_pods.go:89] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:44:15.022801   33166 system_pods.go:89] "kube-proxy-nzlz5" [d59a1a71-3dc2-4cf6-9e01-b61fa8f26053] Running
	I0531 18:44:15.022807   33166 system_pods.go:89] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:44:15.022810   33166 system_pods.go:89] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:44:15.022816   33166 system_pods.go:89] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:44:15.022821   33166 system_pods.go:89] "kube-scheduler-ha-080809-m03" [e9840dfe-2860-4c23-af6a-c45ebd36869e] Running
	I0531 18:44:15.022827   33166 system_pods.go:89] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:44:15.022830   33166 system_pods.go:89] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:44:15.022836   33166 system_pods.go:89] "kube-vip-ha-080809-m03" [b152f6a1-3708-4f12-82f5-385eb59b3572] Running
	I0531 18:44:15.022839   33166 system_pods.go:89] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:44:15.022851   33166 system_pods.go:126] duration metric: took 211.598428ms to wait for k8s-apps to be running ...
	I0531 18:44:15.022864   33166 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:44:15.022915   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:44:15.038466   33166 system_svc.go:56] duration metric: took 15.595196ms WaitForService to wait for kubelet
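The system_svc step shells into the VM and runs systemctl with --quiet, treating exit code 0 as "kubelet is running". A local sketch of the same exit-code check with os/exec (run directly on the node rather than over SSH, and with the unit name simplified to "kubelet"):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
			return
		}
		panic(err)
	}
	fmt.Println("kubelet service is running")
}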
	I0531 18:44:15.038491   33166 kubeadm.go:576] duration metric: took 19.285419331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:44:15.038508   33166 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:44:15.208050   33166 request.go:629] Waited for 169.475176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0531 18:44:15.208101   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0531 18:44:15.208113   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:15.208124   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:15.208132   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:15.211884   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:15.212898   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212921   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212932   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212937   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212941   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212944   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212948   33166 node_conditions.go:105] duration metric: took 174.435487ms to run NodePressure ...
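The NodePressure check lists every node, records its ephemeral-storage and CPU capacity (the three pairs of lines above, one per node), and would flag pressure conditions. A sketch reading those fields with client-go, under the same assumed kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}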
	I0531 18:44:15.212972   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:44:15.213001   33166 start.go:254] writing updated cluster config ...
	I0531 18:44:15.213301   33166 ssh_runner.go:195] Run: rm -f paused
	I0531 18:44:15.263986   33166 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0531 18:44:15.265792   33166 out.go:177] * Done! kubectl is now configured to use "ha-080809" cluster and "default" namespace by default
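The final line compares the local kubectl minor version against the cluster's control-plane version and reports the skew (here 1.30.1 vs 1.30.1, so skew 0). A tiny sketch of that comparison; the parsing is simplified and the version strings are taken from the log above.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	am, err := minor(a)
	if err != nil {
		return 0, err
	}
	bm, err := minor(b)
	if err != nil {
		return 0, err
	}
	if am > bm {
		return am - bm, nil
	}
	return bm - am, nil
}

func main() {
	skew, err := minorSkew("1.30.1", "1.30.1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubectl: 1.30.1, cluster: 1.30.1 (minor skew: %d)\n", skew)
}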
	
	
	==> CRI-O <==
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.693502173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181264693480851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=47f101bc-c086-44fd-af26-a8d8348e6d6b name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.694324422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e924c55-5223-4339-908a-7e29262c118f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.694402729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e924c55-5223-4339-908a-7e29262c118f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.694610181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e924c55-5223-4339-908a-7e29262c118f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.732305451Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3d4b70d2-1b66-467e-92eb-7475d67fd53d name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.732380088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3d4b70d2-1b66-467e-92eb-7475d67fd53d name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.734253137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1d1db7a-4c79-4fc9-b431-e0f18fbf5dfe name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.735036667Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181264735009654,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1d1db7a-4c79-4fc9-b431-e0f18fbf5dfe name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.735789217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c8aab40-0d99-4c93-9e48-3b426ecc4b9f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.735886779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c8aab40-0d99-4c93-9e48-3b426ecc4b9f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.736122628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c8aab40-0d99-4c93-9e48-3b426ecc4b9f name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.774942369Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b097eb5-cfe6-4abb-8ec1-ed6f36d11761 name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.775037693Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b097eb5-cfe6-4abb-8ec1-ed6f36d11761 name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.776644061Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a7b02d6-3074-4780-897d-9bc9c62f3a3f name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.777365858Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181264777339302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a7b02d6-3074-4780-897d-9bc9c62f3a3f name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.778019752Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c7226857-1857-4cd2-98c0-9fbff0378676 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.778071646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c7226857-1857-4cd2-98c0-9fbff0378676 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.778292068Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c7226857-1857-4cd2-98c0-9fbff0378676 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.818038747Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7e40bc3-f2aa-4fd2-8995-602004ec426f name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.818120029Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7e40bc3-f2aa-4fd2-8995-602004ec426f name=/runtime.v1.RuntimeService/Version
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.819439436Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=331c1958-3f4d-4323-8376-c8e258f8fc7c name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.820502678Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181264820475245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=331c1958-3f4d-4323-8376-c8e258f8fc7c name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.821187649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5ebea563-a613-4234-bb3a-3180c3c7aedb name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.821410694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5ebea563-a613-4234-bb3a-3180c3c7aedb name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:47:44 ha-080809 crio[683]: time="2024-05-31 18:47:44.821724564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5ebea563-a613-4234-bb3a-3180c3c7aedb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c9526bf60d95       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   ddcee0fba2818       coredns-7db6d8ff4d-kbkpk
	54e4715f49646       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     5 minutes ago       Running             coredns                   0                   cc0c09dd14d70       coredns-7db6d8ff4d-cr2wg
	53f4842200c26       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     5 minutes ago       Running             storage-provisioner       0                   8bd72a9b0e002       storage-provisioner
	eda951a1aef5d       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266   5 minutes ago       Running             kindnet-cni               0                   2bdbf3a323266       kindnet-fzzfx
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                     5 minutes ago       Running             kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	866bb4e7c05a4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    6 minutes ago       Running             kube-vip                  0                   74e5f430b5360       kube-vip-ha-080809
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                     6 minutes ago       Running             kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	588c0be817c52       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     6 minutes ago       Running             etcd                      0                   4b398e7c07ccc       etcd-ha-080809
	0d31a9e756e1e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                     6 minutes ago       Running             kube-controller-manager   0                   a91f5f7e72635       kube-controller-manager-ha-080809
	0116f67d37a6a       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                     6 minutes ago       Running             kube-apiserver            0                   0f4f1de81138d       kube-apiserver-ha-080809
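
The crio ListContainers responses in the debug log above and this container status table are two views of the same CRI call. A minimal Go sketch of that call against the node's CRI-O socket could look like the following; the socket path comes from the cri-socket annotation shown further below, and the k8s.io/cri-api v1 client plus grpc-go are assumed:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI-O runtime socket (path taken from the node's cri-socket annotation).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty filter returns the full container list, matching the
	// "No filters were applied" debug lines in the crio log above.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Print a truncated ID, name, and state, similar to the table above.
		fmt.Printf("%.13s  %-25s  %s\n", c.Id, c.Metadata.Name, c.State)
	}
}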
	
	
	==> coredns [2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492] <==
	[INFO] 10.244.2.2:51434 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168245s
	[INFO] 10.244.2.2:51800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001241702s
	[INFO] 10.244.2.2:41221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093978s
	[INFO] 10.244.2.2:54690 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080529s
	[INFO] 10.244.1.2:35391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115585s
	[INFO] 10.244.1.2:46555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001915248s
	[INFO] 10.244.1.2:57394 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151681s
	[INFO] 10.244.1.2:36090 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071582s
	[INFO] 10.244.1.3:39302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112531s
	[INFO] 10.244.1.3:59251 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146082s
	[INFO] 10.244.1.3:44994 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186325s
	[INFO] 10.244.2.2:59669 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101916s
	[INFO] 10.244.2.2:48334 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107498s
	[INFO] 10.244.1.2:38765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185939s
	[INFO] 10.244.1.2:60956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128962s
	[INFO] 10.244.1.2:52932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087027s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092276s
	[INFO] 10.244.1.3:44446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000339346s
	[INFO] 10.244.2.2:46765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109543s
	[INFO] 10.244.2.2:41182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131261s
	[INFO] 10.244.2.2:33464 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152439s
	[INFO] 10.244.1.2:42271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104723s
	[INFO] 10.244.1.2:46690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066235s
	[INFO] 10.244.1.2:42389 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013689s
	[INFO] 10.244.1.2:38173 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011827s
	
	
	==> coredns [54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679] <==
	[INFO] 10.244.1.3:33000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001660325s
	[INFO] 10.244.1.3:47322 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005658116s
	[INFO] 10.244.1.3:51140 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014792944s
	[INFO] 10.244.2.2:55470 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001442856s
	[INFO] 10.244.2.2:34606 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083801s
	[INFO] 10.244.1.2:44192 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000087527s
	[INFO] 10.244.1.2:58109 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112644s
	[INFO] 10.244.1.3:48252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001588s
	[INFO] 10.244.1.3:54056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269568s
	[INFO] 10.244.1.3:46632 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000285388s
	[INFO] 10.244.1.3:53120 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013333201s
	[INFO] 10.244.1.3:41820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181168s
	[INFO] 10.244.2.2:45232 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101776s
	[INFO] 10.244.2.2:33325 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079862s
	[INFO] 10.244.1.2:50899 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130749s
	[INFO] 10.244.1.2:41406 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001416454s
	[INFO] 10.244.1.2:60468 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129757s
	[INFO] 10.244.1.2:54879 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126191s
	[INFO] 10.244.1.3:44719 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113779s
	[INFO] 10.244.2.2:45824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196144s
	[INFO] 10.244.2.2:49963 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095304s
	[INFO] 10.244.1.3:40266 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018931s
	[INFO] 10.244.1.3:43033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251756s
	[INFO] 10.244.1.3:51239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124133s
	[INFO] 10.244.2.2:45507 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.001299123s
	
	
	==> describe nodes <==
	Name:               ha-080809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:41:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:47:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-080809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1bd8d3c42441893aa8670274ffb1e
	  System UUID:                0eb1bd8d-3c42-4418-93aa-8670274ffb1e
	  Boot ID:                    a71cdc12-b2dc-410e-bd89-30705a38ac97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cr2wg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m58s
	  kube-system                 coredns-7db6d8ff4d-kbkpk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m58s
	  kube-system                 etcd-ha-080809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m13s
	  kube-system                 kindnet-fzzfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m58s
	  kube-system                 kube-apiserver-ha-080809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-controller-manager-ha-080809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-proxy-dmg2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-scheduler-ha-080809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-vip-ha-080809                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m56s  kube-proxy       
	  Normal  Starting                 6m12s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m12s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m12s  kubelet          Node ha-080809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m12s  kubelet          Node ha-080809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m12s  kubelet          Node ha-080809 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m59s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal  NodeReady                5m53s  kubelet          Node ha-080809 status is now: NodeReady
	  Normal  RegisteredNode           4m46s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal  RegisteredNode           3m36s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	
	
	Name:               ha-080809-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:42:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:45:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-080809-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26fe5dde81c649a890a62a46837233ef
	  System UUID:                26fe5dde-81c6-49a8-90a6-2a46837233ef
	  Boot ID:                    ec159d72-949a-4946-858c-4fe0183a91d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dwnsz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  default                     busybox-fc5497c4f-wqwrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-080809-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m3s
	  kube-system                 kindnet-6jqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m5s
	  kube-system                 kube-apiserver-ha-080809-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-ha-080809-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-proxy-wxq4h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-ha-080809-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-080809-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m5s (x7 over 5m5s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m4s                 node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           4m46s                node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           3m36s                node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  NodeNotReady             111s                 node-controller  Node ha-080809-m02 status is now: NodeNotReady
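
The Unknown conditions and the NodeNotReady event above reflect the ha-080809-m02 kubelet ceasing to post node status (lease last renewed at 18:45:14), after which the node controller marks the node unreachable and applies the NoSchedule/NoExecute taints shown. A minimal client-go sketch for listing each node's Ready condition, assuming a kubeconfig at the default ~/.kube/config location, could look like:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Status becomes "Unknown" once the kubelet stops posting status,
				// as seen for ha-080809-m02 above.
				fmt.Printf("%-20s Ready=%-8s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}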
	
	
	Name:               ha-080809-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:47:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-080809-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 547b87f87b134b9798b7d615ab502227
	  System UUID:                547b87f8-7b13-4b97-98b7-d615ab502227
	  Boot ID:                    8e35dd93-e6c7-4f99-9f16-d89c3fb275fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qtbxj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 etcd-ha-080809-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m51s
	  kube-system                 kindnet-qzfbh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m53s
	  kube-system                 kube-apiserver-ha-080809-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-ha-080809-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 kube-proxy-nzlz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 kube-scheduler-ha-080809-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-vip-ha-080809-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m48s                  kube-proxy       
	  Normal  Starting                 3m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m53s (x9 over 3m53s)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m53s (x7 over 3m53s)  kubelet          Node ha-080809-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m53s (x7 over 3m53s)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m51s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           3m36s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	
	
	Name:               ha-080809-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:44:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:47:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:45:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-080809-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4babfa9b2c834bb583245811edaf0c5d
	  System UUID:                4babfa9b-2c83-4bb5-8324-5811edaf0c5d
	  Boot ID:                    b6f38232-474d-40b0-b4c0-a2b12b8f03ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gtssc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-mzhqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m52s (x2 over 2m52s)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s (x2 over 2m52s)  kubelet          Node ha-080809-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m52s (x2 over 2m52s)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeReady                2m41s                  kubelet          Node ha-080809-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May31 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.498015] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.407254] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[May31 18:41] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.297552] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.055284] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057048] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.190072] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.115739] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.263739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.190448] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.324345] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e] <==
	{"level":"warn","ts":"2024-05-31T18:47:45.075711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.102923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.111265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.115327Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.129293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.137183Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.155235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.160732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.161131Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.165359Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.176942Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.184745Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.193466Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.196502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.199706Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.212205Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.218688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.225926Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.230333Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.233935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.240727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.248268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.25594Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.260837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:47:45.293429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:47:45 up 6 min,  0 users,  load average: 0.53, 0.32, 0.16
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022] <==
	I0531 18:47:13.022939       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:47:23.038333       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:47:23.038376       1 main.go:227] handling current node
	I0531 18:47:23.038390       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:47:23.038398       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:47:23.038527       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:47:23.038559       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:47:23.038677       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:47:23.038707       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:47:33.047124       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:47:33.047264       1 main.go:227] handling current node
	I0531 18:47:33.047291       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:47:33.047377       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:47:33.047682       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:47:33.047711       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:47:33.047951       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:47:33.047980       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:47:43.067370       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:47:43.067552       1 main.go:227] handling current node
	I0531 18:47:43.067665       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:47:43.067791       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:47:43.068167       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:47:43.068272       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:47:43.068544       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:47:43.068666       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a] <==
	I0531 18:41:32.167967       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0531 18:41:32.177477       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2]
	I0531 18:41:32.178289       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 18:41:32.182627       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:41:32.544463       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 18:41:33.450206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 18:41:33.468495       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0531 18:41:33.482739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 18:41:47.322720       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0531 18:41:47.475006       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0531 18:44:20.941354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41450: use of closed network connection
	E0531 18:44:21.128187       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E0531 18:44:21.298026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E0531 18:44:21.500538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41508: use of closed network connection
	E0531 18:44:21.680064       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41538: use of closed network connection
	E0531 18:44:21.891464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41560: use of closed network connection
	E0531 18:44:22.066902       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E0531 18:44:22.251410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41590: use of closed network connection
	E0531 18:44:22.430307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41600: use of closed network connection
	E0531 18:44:22.717157       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E0531 18:44:22.909319       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41640: use of closed network connection
	E0531 18:44:23.098133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41658: use of closed network connection
	E0531 18:44:23.270228       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41676: use of closed network connection
	E0531 18:44:23.455097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41700: use of closed network connection
	E0531 18:44:23.635398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41710: use of closed network connection
	
	
	==> kube-controller-manager [0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801] <==
	I0531 18:44:16.544059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="612.311µs"
	I0531 18:44:16.624131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.854735ms"
	I0531 18:44:16.624305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.636µs"
	I0531 18:44:16.697741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.223009ms"
	I0531 18:44:16.697967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.446µs"
	I0531 18:44:17.103792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.512µs"
	I0531 18:44:18.115478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.653µs"
	I0531 18:44:18.124779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.455µs"
	I0531 18:44:18.129445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.236µs"
	I0531 18:44:20.388943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.45821ms"
	I0531 18:44:20.390209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.792µs"
	I0531 18:44:20.468536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.398456ms"
	I0531 18:44:20.469533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="266.293µs"
	E0531 18:44:53.633299       1 certificate_controller.go:146] Sync csr-jvd5s failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jvd5s": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:44:53.931545       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-080809-m04\" does not exist"
	I0531 18:44:53.946369       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-080809-m04" podCIDRs=["10.244.3.0/24"]
	I0531 18:44:56.716578       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809-m04"
	I0531 18:45:04.488190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-080809-m04"
	I0531 18:45:54.912166       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-080809-m04"
	I0531 18:45:54.968755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.679704ms"
	I0531 18:45:54.969057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.292µs"
	I0531 18:45:55.000471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.782809ms"
	I0531 18:45:55.000587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.046µs"
	I0531 18:45:55.043756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163559ms"
	I0531 18:45:55.043989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.535µs"
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	I0531 18:41:48.266762       1 server_linux.go:69] "Using iptables proxy"
	I0531 18:41:48.288049       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	I0531 18:41:48.334580       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 18:41:48.334639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 18:41:48.334670       1 server_linux.go:165] "Using iptables Proxier"
	I0531 18:41:48.338365       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 18:41:48.338613       1 server.go:872] "Version info" version="v1.30.1"
	I0531 18:41:48.338654       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:41:48.340466       1 config.go:192] "Starting service config controller"
	I0531 18:41:48.340510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 18:41:48.340554       1 config.go:101] "Starting endpoint slice config controller"
	I0531 18:41:48.340581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 18:41:48.341961       1 config.go:319] "Starting node config controller"
	I0531 18:41:48.341997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 18:41:48.441094       1 shared_informer.go:320] Caches are synced for service config
	I0531 18:41:48.441214       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 18:41:48.442485       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	E0531 18:41:31.700889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:41:31.714101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:41:31.714223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 18:41:31.733132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:41:31.733160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:41:31.737459       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:41:31.738059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:41:31.800135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:41:31.800182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:41:31.814109       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:41:31.814154       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:41:31.815296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:41:31.815429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 18:41:34.722391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 18:44:16.176693       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wqwrw\": pod busybox-fc5497c4f-wqwrw is already assigned to node \"ha-080809-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wqwrw" node="ha-080809-m02"
	E0531 18:44:16.176962       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e728434a-8e6b-4486-920a-cf6998e12761(default/busybox-fc5497c4f-wqwrw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wqwrw"
	E0531 18:44:16.176998       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wqwrw\": pod busybox-fc5497c4f-wqwrw is already assigned to node \"ha-080809-m02\"" pod="default/busybox-fc5497c4f-wqwrw"
	I0531 18:44:16.177057       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wqwrw" node="ha-080809-m02"
	I0531 18:44:16.192752       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="43e80d7c-f669-4515-b508-8a3080ac7715" pod="default/busybox-fc5497c4f-dwnsz" assumedNode="ha-080809-m02" currentNode="ha-080809"
	E0531 18:44:16.199425       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dwnsz\": pod busybox-fc5497c4f-dwnsz is already assigned to node \"ha-080809-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dwnsz" node="ha-080809"
	E0531 18:44:16.199503       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 43e80d7c-f669-4515-b508-8a3080ac7715(default/busybox-fc5497c4f-dwnsz) was assumed on ha-080809 but assigned to ha-080809-m02" pod="default/busybox-fc5497c4f-dwnsz"
	E0531 18:44:16.199528       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dwnsz\": pod busybox-fc5497c4f-dwnsz is already assigned to node \"ha-080809-m02\"" pod="default/busybox-fc5497c4f-dwnsz"
	I0531 18:44:16.199583       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dwnsz" node="ha-080809-m02"
	E0531 18:44:53.997727       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wrk52\": pod kindnet-wrk52 is already assigned to node \"ha-080809-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wrk52" node="ha-080809-m04"
	E0531 18:44:53.997917       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wrk52\": pod kindnet-wrk52 is already assigned to node \"ha-080809-m04\"" pod="kube-system/kindnet-wrk52"
	
	
	==> kubelet <==
	May 31 18:44:16 ha-080809 kubelet[1363]: E0531 18:44:16.354426    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-rr5zl], unattached volumes=[], failed to process volumes=[]: context canceled" pod="default/busybox-fc5497c4f-zjg8z" podUID="3b87e550-1c8b-4088-ade5-4c2a922b3c22"
	May 31 18:44:17 ha-080809 kubelet[1363]: I0531 18:44:17.140685    1363 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr5zl\" (UniqueName: \"kubernetes.io/projected/3b87e550-1c8b-4088-ade5-4c2a922b3c22-kube-api-access-rr5zl\") pod \"3b87e550-1c8b-4088-ade5-4c2a922b3c22\" (UID: \"3b87e550-1c8b-4088-ade5-4c2a922b3c22\") "
	May 31 18:44:17 ha-080809 kubelet[1363]: I0531 18:44:17.147137    1363 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b87e550-1c8b-4088-ade5-4c2a922b3c22-kube-api-access-rr5zl" (OuterVolumeSpecName: "kube-api-access-rr5zl") pod "3b87e550-1c8b-4088-ade5-4c2a922b3c22" (UID: "3b87e550-1c8b-4088-ade5-4c2a922b3c22"). InnerVolumeSpecName "kube-api-access-rr5zl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 31 18:44:17 ha-080809 kubelet[1363]: I0531 18:44:17.241423    1363 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-rr5zl\" (UniqueName: \"kubernetes.io/projected/3b87e550-1c8b-4088-ade5-4c2a922b3c22-kube-api-access-rr5zl\") on node \"ha-080809\" DevicePath \"\""
	May 31 18:44:19 ha-080809 kubelet[1363]: I0531 18:44:19.414169    1363 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b87e550-1c8b-4088-ade5-4c2a922b3c22" path="/var/lib/kubelet/pods/3b87e550-1c8b-4088-ade5-4c2a922b3c22/volumes"
	May 31 18:44:33 ha-080809 kubelet[1363]: E0531 18:44:33.424151    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:44:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:44:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:44:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:44:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:45:33 ha-080809 kubelet[1363]: E0531 18:45:33.423735    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:45:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:45:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:45:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:45:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:46:33 ha-080809 kubelet[1363]: E0531 18:46:33.425397    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:46:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:46:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:46:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:46:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:47:33 ha-080809 kubelet[1363]: E0531 18:47:33.422224    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:47:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:47:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:47:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:47:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:261: (dbg) Run:  kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.84s)
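The two post-mortem helpers above are read-only checks; a minimal sketch of running the same checks by hand against this profile, assuming the out/minikube-linux-amd64 binary and the ha-080809 kubeconfig context from this run are still available, would be:

	# query only the apiserver field of the profile's status for node ha-080809
	out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
	# list pods in any namespace that are not in the Running phase
	kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running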

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (48.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (3.202133209s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:47:49.824010   37960 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:47:49.824244   37960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:49.824254   37960 out.go:304] Setting ErrFile to fd 2...
	I0531 18:47:49.824258   37960 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:49.824428   37960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:47:49.824580   37960 out.go:298] Setting JSON to false
	I0531 18:47:49.824604   37960 mustload.go:65] Loading cluster: ha-080809
	I0531 18:47:49.824653   37960 notify.go:220] Checking for updates...
	I0531 18:47:49.825091   37960 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:47:49.825110   37960 status.go:255] checking status of ha-080809 ...
	I0531 18:47:49.825942   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:49.825980   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:49.846653   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I0531 18:47:49.847029   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:49.847603   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:49.847626   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:49.848013   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:49.848216   37960 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:47:49.849730   37960 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:47:49.849753   37960 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:49.850052   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:49.850103   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:49.865266   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45121
	I0531 18:47:49.865624   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:49.866049   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:49.866067   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:49.866364   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:49.866578   37960 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:47:49.869402   37960 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:49.869844   37960 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:49.869880   37960 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:49.870006   37960 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:49.870300   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:49.870337   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:49.887052   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0531 18:47:49.887555   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:49.888082   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:49.888109   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:49.888400   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:49.888585   37960 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:47:49.888769   37960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:49.888799   37960 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:47:49.891231   37960 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:49.891685   37960 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:49.891709   37960 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:49.891862   37960 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:47:49.892030   37960 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:47:49.892192   37960 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:47:49.892329   37960 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:47:49.981393   37960 ssh_runner.go:195] Run: systemctl --version
	I0531 18:47:49.987497   37960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:50.002704   37960 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:50.002744   37960 api_server.go:166] Checking apiserver status ...
	I0531 18:47:50.002781   37960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:50.015673   37960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:47:50.024606   37960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:50.024646   37960 ssh_runner.go:195] Run: ls
	I0531 18:47:50.029493   37960 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:50.035256   37960 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:50.035275   37960 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:47:50.035283   37960 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:50.035305   37960 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:47:50.035667   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:50.035720   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:50.050475   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45247
	I0531 18:47:50.050854   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:50.051336   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:50.051354   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:50.051632   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:50.051818   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:47:50.053300   37960 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:47:50.053318   37960 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:50.053607   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:50.053667   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:50.068278   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43101
	I0531 18:47:50.068612   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:50.069049   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:50.069067   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:50.069401   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:50.069586   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:47:50.072345   37960 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:50.072787   37960 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:50.072812   37960 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:50.072917   37960 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:50.073216   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:50.073246   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:50.088208   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I0531 18:47:50.088535   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:50.088982   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:50.089004   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:50.089344   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:50.089532   37960 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:47:50.089668   37960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:50.089689   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:47:50.091934   37960 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:50.092359   37960 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:50.092394   37960 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:50.092711   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:47:50.092872   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:47:50.093007   37960 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:47:50.093121   37960 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:47:52.637236   37960 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:47:52.637353   37960 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:47:52.637366   37960 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:52.637374   37960 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:47:52.637390   37960 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:52.637404   37960 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:47:52.637698   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.637735   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.652888   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39743
	I0531 18:47:52.653305   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.653737   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.653758   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.654097   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.654262   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:47:52.655760   37960 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:47:52.655774   37960 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:52.656086   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.656126   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.670472   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32867
	I0531 18:47:52.670848   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.671306   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.671328   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.671710   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.671907   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:47:52.674613   37960 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:52.674983   37960 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:52.675014   37960 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:52.675112   37960 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:52.675506   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.675542   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.691291   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43805
	I0531 18:47:52.691756   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.692213   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.692233   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.692498   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.692670   37960 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:47:52.692860   37960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:52.692878   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:47:52.695605   37960 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:52.695948   37960 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:52.695984   37960 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:52.696163   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:47:52.696325   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:47:52.696515   37960 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:47:52.696685   37960 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:47:52.783876   37960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:52.798661   37960 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:52.798685   37960 api_server.go:166] Checking apiserver status ...
	I0531 18:47:52.798713   37960 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:52.813150   37960 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:47:52.822371   37960 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:52.822438   37960 ssh_runner.go:195] Run: ls
	I0531 18:47:52.827268   37960 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:52.831925   37960 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:52.831947   37960 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:47:52.831959   37960 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:52.831983   37960 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:47:52.832357   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.832398   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.847761   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46143
	I0531 18:47:52.848146   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.848634   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.848664   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.848987   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.849190   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:47:52.850780   37960 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:47:52.850797   37960 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:52.851204   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.851251   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.866162   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41011
	I0531 18:47:52.866582   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.867025   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.867047   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.867391   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.867592   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:47:52.870243   37960 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:52.870694   37960 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:52.870721   37960 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:52.870846   37960 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:52.871237   37960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:52.871285   37960 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:52.885526   37960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0531 18:47:52.885916   37960 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:52.886409   37960 main.go:141] libmachine: Using API Version  1
	I0531 18:47:52.886428   37960 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:52.886721   37960 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:52.886860   37960 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:47:52.887085   37960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:52.887101   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:47:52.889451   37960 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:52.889813   37960 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:52.889848   37960 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:52.889978   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:47:52.890137   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:47:52.890282   37960 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:47:52.890385   37960 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:47:52.973332   37960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:52.988200   37960 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
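The stderr trace above walks node by node through the same probe sequence: launch the kvm2 driver plugin, look up the node's IP from its DHCP lease, open an SSH session, run df -h /var and a kubelet liveness check, and for control-plane nodes query the apiserver healthz through the HA virtual IP. For ha-080809-m02 the SSH dial itself fails with "no route to host", which is why kubelet and apiserver are reported as Nonexistent. A rough sketch of re-running the individual probes by hand, assuming minikube ssh accepts a --node flag and a trailing command (the node name, the kubelet check, and the VIP are taken from the log; the exact invocation is illustrative, not the harness's own code path):

	# SSH into the secondary control-plane node; in this run the dial fails with "no route to host"
	out/minikube-linux-amd64 -p ha-080809 ssh --node ha-080809-m02 -- df -h /var
	# the kubelet liveness check that status runs over SSH (command copied from the log)
	out/minikube-linux-amd64 -p ha-080809 ssh --node ha-080809-m02 -- sudo systemctl is-active --quiet service kubelet
	# the apiserver health endpoint checked through the HA virtual IP shown in the log
	curl -k https://192.168.39.254:8443/healthz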
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (2.36279089s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:47:53.738175   38061 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:47:53.738294   38061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:53.738304   38061 out.go:304] Setting ErrFile to fd 2...
	I0531 18:47:53.738311   38061 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:53.738480   38061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:47:53.738668   38061 out.go:298] Setting JSON to false
	I0531 18:47:53.738693   38061 mustload.go:65] Loading cluster: ha-080809
	I0531 18:47:53.738721   38061 notify.go:220] Checking for updates...
	I0531 18:47:53.739084   38061 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:47:53.739101   38061 status.go:255] checking status of ha-080809 ...
	I0531 18:47:53.739497   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.739586   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:53.758845   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0531 18:47:53.759312   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:53.759873   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:53.759900   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:53.760271   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:53.760476   38061 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:47:53.762177   38061 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:47:53.762197   38061 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:53.762487   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.762522   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:53.777491   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32943
	I0531 18:47:53.777870   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:53.778299   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:53.778319   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:53.778637   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:53.778802   38061 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:47:53.781694   38061 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:53.782131   38061 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:53.782159   38061 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:53.782287   38061 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:53.782579   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.782628   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:53.797454   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32993
	I0531 18:47:53.797800   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:53.798206   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:53.798226   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:53.798510   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:53.798752   38061 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:47:53.798920   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:53.798961   38061 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:47:53.801682   38061 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:53.802112   38061 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:53.802143   38061 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:53.802230   38061 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:47:53.802394   38061 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:47:53.802564   38061 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:47:53.802711   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:47:53.889410   38061 ssh_runner.go:195] Run: systemctl --version
	I0531 18:47:53.901111   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:53.921446   38061 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:53.921480   38061 api_server.go:166] Checking apiserver status ...
	I0531 18:47:53.921511   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:53.935617   38061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:47:53.945035   38061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:53.945095   38061 ssh_runner.go:195] Run: ls
	I0531 18:47:53.949479   38061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:53.953556   38061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:53.953576   38061 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:47:53.953584   38061 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:53.953599   38061 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:47:53.953877   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.953912   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:53.969819   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38831
	I0531 18:47:53.970319   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:53.970865   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:53.970893   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:53.971257   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:53.971453   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:47:53.973140   38061 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:47:53.973158   38061 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:53.973436   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.973474   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:53.988798   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33541
	I0531 18:47:53.989168   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:53.989619   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:53.989643   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:53.989946   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:53.990141   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:47:53.992572   38061 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:53.993045   38061 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:53.993065   38061 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:53.993192   38061 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:53.993473   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:53.993504   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:54.007965   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33163
	I0531 18:47:54.008395   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:54.008863   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:54.008885   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:54.009175   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:54.009370   38061 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:47:54.009567   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:54.009590   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:47:54.012472   38061 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:54.012878   38061 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:54.012908   38061 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:54.013145   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:47:54.013300   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:47:54.013429   38061 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:47:54.013550   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:47:55.713238   38061 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:47:55.713317   38061 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:47:55.713332   38061 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:55.713341   38061 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:47:55.713356   38061 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:55.713383   38061 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:47:55.713666   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.713712   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.728484   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33105
	I0531 18:47:55.728869   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.729381   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.729404   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.729689   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.729893   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:47:55.731442   38061 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:47:55.731455   38061 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:55.731736   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.731778   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.745799   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39787
	I0531 18:47:55.746135   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.746604   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.746628   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.746923   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.747114   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:47:55.749830   38061 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:55.750240   38061 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:55.750278   38061 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:55.750433   38061 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:47:55.750717   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.750759   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.766257   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41661
	I0531 18:47:55.766635   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.767089   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.767106   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.767402   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.767575   38061 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:47:55.767745   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:55.767767   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:47:55.770265   38061 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:55.770676   38061 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:47:55.770706   38061 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:47:55.770831   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:47:55.771015   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:47:55.771172   38061 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:47:55.771348   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:47:55.858194   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:55.875029   38061 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:55.875059   38061 api_server.go:166] Checking apiserver status ...
	I0531 18:47:55.875102   38061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:55.889852   38061 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:47:55.899352   38061 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:55.899402   38061 ssh_runner.go:195] Run: ls
	I0531 18:47:55.903925   38061 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:55.907981   38061 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:55.908009   38061 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:47:55.908017   38061 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:55.908035   38061 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:47:55.908336   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.908374   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.924475   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0531 18:47:55.924880   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.925384   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.925406   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.925733   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.925910   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:47:55.927317   38061 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:47:55.927334   38061 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:55.927600   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.927630   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.941423   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40951
	I0531 18:47:55.941763   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.942136   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.942153   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.942439   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.942622   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:47:55.945131   38061 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:55.945531   38061 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:55.945560   38061 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:55.945681   38061 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:47:55.945985   38061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:55.946018   38061 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:55.960082   38061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0531 18:47:55.960399   38061 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:55.960856   38061 main.go:141] libmachine: Using API Version  1
	I0531 18:47:55.960876   38061 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:55.961211   38061 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:55.961390   38061 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:47:55.961567   38061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:55.961589   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:47:55.964223   38061 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:55.964628   38061 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:47:55.964646   38061 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:47:55.964815   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:47:55.964950   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:47:55.965093   38061 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:47:55.965199   38061 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:47:56.044658   38061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:56.059576   38061 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
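The stderr above shows how the status probe walks each node: it dials the node over SSH, samples /var usage with df, checks whether the kubelet unit is active, looks up the kube-apiserver process, and finally queries the shared endpoint at https://192.168.39.254:8443/healthz. For ha-080809-m02 the SSH dial itself fails with "connect: no route to host", so the host is reported as Error and kubelet/apiserver as Nonexistent. A minimal Go sketch of that reachability probe follows (this is not minikube's actual sshutil code; the default address is simply the m02 IP and SSH port taken from the log, and a different node can be passed on the command line):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Sketch only: reproduce the TCP dial to a node's SSH port that the status
// probe performs before running any remote commands. For ha-080809-m02 in
// the log above, this dial is what fails with "no route to host".
func main() {
	addr := "192.168.39.51:22" // m02 address from the log; override via argv[1]
	if len(os.Args) > 1 {
		addr = os.Args[1]
	}
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "dial failed: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s reachable\n", addr)
}

Running this against 192.168.39.51:22 while the node is in the state captured above should reproduce the same "connect: no route to host" error that the retries in the log keep hitting.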
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (4.747847084s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:47:57.499986   38145 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:47:57.500102   38145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:57.500113   38145 out.go:304] Setting ErrFile to fd 2...
	I0531 18:47:57.500120   38145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:47:57.500291   38145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:47:57.500460   38145 out.go:298] Setting JSON to false
	I0531 18:47:57.500485   38145 mustload.go:65] Loading cluster: ha-080809
	I0531 18:47:57.500580   38145 notify.go:220] Checking for updates...
	I0531 18:47:57.500883   38145 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:47:57.500899   38145 status.go:255] checking status of ha-080809 ...
	I0531 18:47:57.501362   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.501405   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.520797   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I0531 18:47:57.521283   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.521840   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.521862   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.522209   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.522427   38145 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:47:57.524129   38145 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:47:57.524146   38145 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:57.524533   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.524575   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.540086   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38473
	I0531 18:47:57.540463   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.540866   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.540883   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.541187   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.541361   38145 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:47:57.544156   38145 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:57.544578   38145 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:57.544603   38145 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:57.544744   38145 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:47:57.545116   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.545150   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.559636   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38593
	I0531 18:47:57.559957   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.560367   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.560385   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.560654   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.560847   38145 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:47:57.561025   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:57.561066   38145 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:47:57.563345   38145 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:57.563718   38145 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:47:57.563748   38145 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:47:57.563865   38145 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:47:57.564045   38145 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:47:57.564182   38145 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:47:57.564325   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:47:57.649905   38145 ssh_runner.go:195] Run: systemctl --version
	I0531 18:47:57.656029   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:47:57.670942   38145 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:47:57.670968   38145 api_server.go:166] Checking apiserver status ...
	I0531 18:47:57.670994   38145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:47:57.688210   38145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:47:57.698220   38145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:47:57.698254   38145 ssh_runner.go:195] Run: ls
	I0531 18:47:57.702855   38145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:47:57.706759   38145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:47:57.706776   38145 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:47:57.706785   38145 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:47:57.706798   38145 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:47:57.707111   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.707153   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.721715   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0531 18:47:57.722108   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.722547   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.722567   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.722884   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.723058   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:47:57.724516   38145 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:47:57.724530   38145 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:57.724795   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.724827   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.739582   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35885
	I0531 18:47:57.739919   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.740316   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.740339   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.740663   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.740834   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:47:57.743675   38145 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:57.744208   38145 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:57.744250   38145 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:57.744408   38145 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:47:57.744803   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:47:57.744843   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:47:57.759756   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38935
	I0531 18:47:57.760075   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:47:57.760451   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:47:57.760475   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:47:57.760839   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:47:57.761065   38145 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:47:57.761258   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:47:57.761275   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:47:57.763509   38145 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:57.763904   38145 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:47:57.763936   38145 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:47:57.764062   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:47:57.764213   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:47:57.764358   38145 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:47:57.764497   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:47:58.785194   38145 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:47:58.785267   38145 retry.go:31] will retry after 366.793114ms: dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:01.853364   38145 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:01.853478   38145 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:48:01.853503   38145 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:01.853514   38145 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:48:01.853546   38145 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:01.853570   38145 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:48:01.853895   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:01.853949   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:01.869581   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42713
	I0531 18:48:01.870104   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:01.870580   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:01.870601   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:01.870975   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:01.871145   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:01.872923   38145 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:48:01.872941   38145 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:01.873248   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:01.873287   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:01.887127   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40419
	I0531 18:48:01.887475   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:01.887870   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:01.887888   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:01.888199   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:01.888367   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:48:01.891021   38145 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:01.891427   38145 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:01.891450   38145 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:01.891613   38145 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:01.892002   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:01.892056   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:01.905692   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43485
	I0531 18:48:01.906056   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:01.906504   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:01.906538   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:01.906877   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:01.907075   38145 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:01.907261   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:01.907286   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:01.909772   38145 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:01.910172   38145 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:01.910196   38145 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:01.910364   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:01.910528   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:01.910668   38145 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:01.910808   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:01.998926   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:02.016685   38145 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:02.016719   38145 api_server.go:166] Checking apiserver status ...
	I0531 18:48:02.016760   38145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:02.031712   38145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:48:02.040934   38145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:02.040996   38145 ssh_runner.go:195] Run: ls
	I0531 18:48:02.045150   38145 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:02.049606   38145 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:02.049627   38145 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:48:02.049637   38145 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:02.049651   38145 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:48:02.049947   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:02.049986   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:02.065116   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42505
	I0531 18:48:02.065488   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:02.065953   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:02.065979   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:02.066292   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:02.066465   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:02.067955   38145 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:48:02.067969   38145 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:02.068316   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:02.068378   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:02.083531   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40525
	I0531 18:48:02.083949   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:02.084344   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:02.084364   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:02.084631   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:02.084749   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:48:02.087173   38145 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:02.087551   38145 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:02.087590   38145 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:02.087721   38145 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:02.087987   38145 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:02.088024   38145 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:02.102731   38145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0531 18:48:02.103203   38145 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:02.103659   38145 main.go:141] libmachine: Using API Version  1
	I0531 18:48:02.103679   38145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:02.103978   38145 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:02.104149   38145 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:02.104345   38145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:02.104376   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:02.107544   38145 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:02.107941   38145 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:02.107971   38145 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:02.108138   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:02.108310   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:02.108476   38145 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:02.108646   38145 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:02.189184   38145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:02.207206   38145 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
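On the nodes it can reach, the probe settles the apiserver status in a few steps visible in the stderr: `sudo pgrep -xnf kube-apiserver.*minikube.*` finds the process, the freezer-cgroup lookup fails with a warning only (on a cgroup v2 guest /proc/<pid>/cgroup has no per-controller "freezer:" line, which is the likely reason for the exit status 1 here), and a GET against https://192.168.39.254:8443/healthz returning 200 marks the apiserver as Running. A minimal Go sketch of that final healthz request follows (certificate verification is skipped only to keep the sketch self-contained, and whether an anonymous request is accepted depends on the cluster's RBAC defaults):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Sketch only: query the HA virtual-IP healthz endpoint that the status
// probe checks after confirming the kube-apiserver process exists.
func main() {
	url := "https://192.168.39.254:8443/healthz" // VIP from the log
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the example self-contained;
			// a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}

A 200 response with body "ok" matches the "returned 200: ok" lines in the stderr above; a connect error or any other status would flip that node's apiserver field away from Running.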
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (4.895460731s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:48:03.505426   38263 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:48:03.505522   38263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:03.505529   38263 out.go:304] Setting ErrFile to fd 2...
	I0531 18:48:03.505533   38263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:03.505701   38263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:48:03.505839   38263 out.go:298] Setting JSON to false
	I0531 18:48:03.505858   38263 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:03.505983   38263 notify.go:220] Checking for updates...
	I0531 18:48:03.506213   38263 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:03.506232   38263 status.go:255] checking status of ha-080809 ...
	I0531 18:48:03.506614   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.506667   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.521940   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44713
	I0531 18:48:03.522383   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.522930   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.522948   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.523332   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.523512   38263 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:48:03.525158   38263 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:48:03.525183   38263 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:03.525512   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.525552   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.539808   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32781
	I0531 18:48:03.540217   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.540619   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.540644   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.540893   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.541180   38263 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:48:03.543874   38263 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:03.544271   38263 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:03.544304   38263 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:03.544411   38263 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:03.544677   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.544706   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.559287   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I0531 18:48:03.559681   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.560125   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.560144   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.560433   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.560604   38263 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:48:03.560793   38263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:03.560814   38263 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:48:03.563289   38263 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:03.563675   38263 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:03.563703   38263 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:03.563812   38263 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:48:03.563939   38263 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:48:03.564095   38263 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:48:03.564193   38263 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:48:03.653343   38263 ssh_runner.go:195] Run: systemctl --version
	I0531 18:48:03.660365   38263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:03.677174   38263 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:03.677215   38263 api_server.go:166] Checking apiserver status ...
	I0531 18:48:03.677262   38263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:03.693920   38263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:48:03.703264   38263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:03.703321   38263 ssh_runner.go:195] Run: ls
	I0531 18:48:03.707515   38263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:03.713398   38263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:03.713415   38263 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:48:03.713424   38263 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:03.713437   38263 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:48:03.713701   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.713731   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.728188   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46163
	I0531 18:48:03.728533   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.728990   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.729012   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.729304   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.729473   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:48:03.730927   38263 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:48:03.730940   38263 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:03.731230   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.731259   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.745176   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I0531 18:48:03.745520   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.745908   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.745927   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.746221   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.746390   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:48:03.748952   38263 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:03.749320   38263 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:03.749334   38263 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:03.749489   38263 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:03.749761   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:03.749793   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:03.763883   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36343
	I0531 18:48:03.764213   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:03.764688   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:03.764709   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:03.765010   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:03.765187   38263 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:48:03.765354   38263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:03.765386   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:48:03.767624   38263 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:03.767928   38263 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:03.767959   38263 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:03.768050   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:48:03.768234   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:48:03.768416   38263 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:48:03.768558   38263 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:48:04.925287   38263 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:04.925328   38263 retry.go:31] will retry after 187.158533ms: dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:08.001314   38263 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:08.001405   38263 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:48:08.001423   38263 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:08.001431   38263 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:48:08.001467   38263 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:08.001474   38263 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:48:08.001754   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.001811   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.016340   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41123
	I0531 18:48:08.016746   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.017198   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.017218   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.017520   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.017698   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:08.019183   38263 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:48:08.019196   38263 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:08.019601   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.019650   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.034106   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40561
	I0531 18:48:08.034566   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.035005   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.035024   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.035258   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.035371   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:48:08.038128   38263 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:08.038498   38263 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:08.038523   38263 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:08.038668   38263 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:08.039012   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.039046   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.054004   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45019
	I0531 18:48:08.054312   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.054748   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.054774   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.055078   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.055281   38263 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:08.055447   38263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:08.055463   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:08.057796   38263 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:08.058142   38263 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:08.058186   38263 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:08.058297   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:08.058456   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:08.058615   38263 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:08.058748   38263 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:08.150126   38263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:08.168723   38263 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:08.168744   38263 api_server.go:166] Checking apiserver status ...
	I0531 18:48:08.168777   38263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:08.184222   38263 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:48:08.194107   38263 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:08.194157   38263 ssh_runner.go:195] Run: ls
	I0531 18:48:08.199196   38263 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:08.203281   38263 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:08.203302   38263 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:48:08.203309   38263 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:08.203322   38263 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:48:08.203598   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.203628   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.218890   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0531 18:48:08.219328   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.219724   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.219742   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.220064   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.220252   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:08.221824   38263 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:48:08.221837   38263 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:08.222167   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.222207   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.236618   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41301
	I0531 18:48:08.237049   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.237464   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.237481   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.237788   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.237938   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:48:08.240752   38263 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:08.241229   38263 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:08.241260   38263 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:08.241438   38263 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:08.241844   38263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:08.241888   38263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:08.257960   38263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0531 18:48:08.258392   38263 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:08.258928   38263 main.go:141] libmachine: Using API Version  1
	I0531 18:48:08.258956   38263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:08.259275   38263 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:08.259468   38263 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:08.259642   38263 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:08.259664   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:08.262661   38263 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:08.263056   38263 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:08.263076   38263 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:08.263206   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:08.263385   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:08.263558   38263 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:08.263673   38263 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:08.340620   38263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:08.358037   38263 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (3.694587023s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:48:11.510675   38363 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:48:11.510777   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:11.510788   38363 out.go:304] Setting ErrFile to fd 2...
	I0531 18:48:11.510793   38363 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:11.511393   38363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:48:11.511693   38363 out.go:298] Setting JSON to false
	I0531 18:48:11.511720   38363 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:11.511917   38363 notify.go:220] Checking for updates...
	I0531 18:48:11.512507   38363 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:11.512527   38363 status.go:255] checking status of ha-080809 ...
	I0531 18:48:11.512887   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.512950   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.530058   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36335
	I0531 18:48:11.530437   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.530976   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.531002   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.531310   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.531509   38363 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:48:11.533069   38363 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:48:11.533087   38363 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:11.533418   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.533450   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.548707   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45215
	I0531 18:48:11.549118   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.549549   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.549575   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.549861   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.550031   38363 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:48:11.552589   38363 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:11.553092   38363 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:11.553126   38363 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:11.553247   38363 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:11.553513   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.553545   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.567399   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37489
	I0531 18:48:11.567764   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.568144   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.568164   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.568457   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.568644   38363 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:48:11.568849   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:11.568876   38363 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:48:11.571456   38363 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:11.571906   38363 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:11.571939   38363 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:11.572125   38363 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:48:11.572292   38363 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:48:11.572432   38363 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:48:11.572592   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:48:11.652817   38363 ssh_runner.go:195] Run: systemctl --version
	I0531 18:48:11.658980   38363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:11.673066   38363 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:11.673102   38363 api_server.go:166] Checking apiserver status ...
	I0531 18:48:11.673136   38363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:11.687363   38363 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:48:11.696525   38363 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:11.696563   38363 ssh_runner.go:195] Run: ls
	I0531 18:48:11.701365   38363 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:11.705507   38363 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:11.705524   38363 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:48:11.705533   38363 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:11.705547   38363 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:48:11.705813   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.705840   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.720237   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36087
	I0531 18:48:11.720619   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.721056   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.721076   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.721392   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.721646   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:48:11.723133   38363 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:48:11.723149   38363 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:11.723420   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.723448   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.738993   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0531 18:48:11.739324   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.739782   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.739804   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.740115   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.740284   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:48:11.742889   38363 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:11.743298   38363 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:11.743321   38363 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:11.743428   38363 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:11.743680   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:11.743712   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:11.757544   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43703
	I0531 18:48:11.757863   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:11.758318   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:11.758340   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:11.758648   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:11.758847   38363 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:48:11.759067   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:11.759091   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:48:11.761847   38363 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:11.762348   38363 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:11.762383   38363 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:11.762524   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:48:11.762679   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:48:11.762811   38363 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:48:11.762935   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:48:14.813233   38363 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:14.813405   38363 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:48:14.813444   38363 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:14.813459   38363 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:48:14.813481   38363 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:14.813505   38363 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:48:14.813838   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:14.813893   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:14.828446   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38361
	I0531 18:48:14.828905   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:14.829414   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:14.829436   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:14.829722   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:14.829904   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:14.831443   38363 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:48:14.831456   38363 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:14.831826   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:14.831879   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:14.846432   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0531 18:48:14.846815   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:14.847206   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:14.847226   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:14.847538   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:14.847723   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:48:14.850706   38363 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:14.851142   38363 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:14.851169   38363 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:14.851288   38363 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:14.851739   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:14.851786   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:14.866434   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42343
	I0531 18:48:14.866874   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:14.867317   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:14.867345   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:14.867648   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:14.867827   38363 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:14.868049   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:14.868072   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:14.870752   38363 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:14.871165   38363 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:14.871199   38363 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:14.871337   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:14.871508   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:14.871654   38363 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:14.872096   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:14.956783   38363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:14.972848   38363 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:14.972883   38363 api_server.go:166] Checking apiserver status ...
	I0531 18:48:14.972933   38363 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:14.988022   38363 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:48:14.998230   38363 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:14.998324   38363 ssh_runner.go:195] Run: ls
	I0531 18:48:15.002962   38363 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:15.008833   38363 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:15.008854   38363 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:48:15.008864   38363 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:15.008891   38363 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:48:15.009206   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:15.009248   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:15.024351   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I0531 18:48:15.024804   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:15.025310   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:15.025331   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:15.025628   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:15.025794   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:15.027375   38363 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:48:15.027390   38363 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:15.027693   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:15.027725   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:15.042691   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45515
	I0531 18:48:15.043113   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:15.043553   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:15.043576   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:15.043937   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:15.044157   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:48:15.046916   38363 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:15.047359   38363 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:15.047391   38363 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:15.047532   38363 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:15.047810   38363 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:15.047841   38363 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:15.062474   38363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0531 18:48:15.062938   38363 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:15.063403   38363 main.go:141] libmachine: Using API Version  1
	I0531 18:48:15.063426   38363 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:15.063724   38363 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:15.063888   38363 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:15.064058   38363 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:15.064081   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:15.066807   38363 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:15.067205   38363 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:15.067230   38363 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:15.067369   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:15.067526   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:15.067665   38363 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:15.067773   38363 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:15.148694   38363 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:15.163424   38363 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 3 (3.708768665s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:48:21.189684   38483 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:48:21.189941   38483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:21.189958   38483 out.go:304] Setting ErrFile to fd 2...
	I0531 18:48:21.189964   38483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:21.190119   38483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:48:21.190272   38483 out.go:298] Setting JSON to false
	I0531 18:48:21.190294   38483 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:21.190340   38483 notify.go:220] Checking for updates...
	I0531 18:48:21.190818   38483 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:21.190838   38483 status.go:255] checking status of ha-080809 ...
	I0531 18:48:21.191258   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.191329   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.209592   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40855
	I0531 18:48:21.209988   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.210470   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.210494   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.210854   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.211055   38483 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:48:21.212679   38483 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:48:21.212695   38483 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:21.213064   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.213107   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.227272   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36615
	I0531 18:48:21.227644   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.228065   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.228087   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.228381   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.228567   38483 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:48:21.231052   38483 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:21.231450   38483 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:21.231468   38483 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:21.231571   38483 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:21.231856   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.231920   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.245543   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36023
	I0531 18:48:21.245881   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.246280   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.246299   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.246616   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.246809   38483 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:48:21.246987   38483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:21.247015   38483 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:48:21.249628   38483 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:21.250046   38483 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:21.250074   38483 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:21.250229   38483 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:48:21.250388   38483 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:48:21.250525   38483 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:48:21.250619   38483 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:48:21.332276   38483 ssh_runner.go:195] Run: systemctl --version
	I0531 18:48:21.339487   38483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:21.354886   38483 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:21.354919   38483 api_server.go:166] Checking apiserver status ...
	I0531 18:48:21.354945   38483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:21.368377   38483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:48:21.378252   38483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:21.378288   38483 ssh_runner.go:195] Run: ls
	I0531 18:48:21.382573   38483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:21.389330   38483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:21.389353   38483 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:48:21.389366   38483 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:21.389396   38483 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:48:21.389671   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.389703   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.404418   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46485
	I0531 18:48:21.404822   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.405336   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.405363   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.405606   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.405782   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:48:21.407317   38483 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 18:48:21.407334   38483 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:21.407595   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.407628   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.422006   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0531 18:48:21.422353   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.422795   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.422821   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.423203   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.423405   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:48:21.426234   38483 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:21.426728   38483 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:21.426765   38483 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:21.426876   38483 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 18:48:21.427169   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:21.427225   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:21.441826   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46291
	I0531 18:48:21.442145   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:21.442557   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:21.442574   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:21.442893   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:21.443060   38483 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:48:21.443239   38483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:21.443256   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:48:21.445790   38483 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:21.446203   38483 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:48:21.446225   38483 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:48:21.446374   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:48:21.446529   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:48:21.446678   38483 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:48:21.446817   38483 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 18:48:24.509230   38483 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 18:48:24.509347   38483 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 18:48:24.509370   38483 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:24.509383   38483 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 18:48:24.509408   38483 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 18:48:24.509418   38483 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:48:24.509767   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.509809   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.525600   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45679
	I0531 18:48:24.526043   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.526591   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.526625   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.526962   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.527163   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:24.528738   38483 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:48:24.528755   38483 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:24.529184   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.529226   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.542940   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44283
	I0531 18:48:24.543375   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.543828   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.543851   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.544113   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.544288   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:48:24.546690   38483 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:24.547154   38483 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:24.547182   38483 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:24.547350   38483 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:24.547701   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.547744   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.561461   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45045
	I0531 18:48:24.561790   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.562274   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.562293   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.562563   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.562741   38483 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:24.562899   38483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:24.562926   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:24.565390   38483 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:24.565787   38483 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:24.565815   38483 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:24.565965   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:24.566129   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:24.566300   38483 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:24.566413   38483 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:24.652306   38483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:24.668371   38483 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:24.668399   38483 api_server.go:166] Checking apiserver status ...
	I0531 18:48:24.668439   38483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:24.682444   38483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:48:24.692778   38483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:24.692818   38483 ssh_runner.go:195] Run: ls
	I0531 18:48:24.697060   38483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:24.706034   38483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:24.706051   38483 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:48:24.706058   38483 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:24.706071   38483 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:48:24.706324   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.706366   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.721826   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0531 18:48:24.722235   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.722697   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.722716   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.723021   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.723207   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:24.724750   38483 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:48:24.724766   38483 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:24.725095   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.725155   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.739490   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41635
	I0531 18:48:24.739856   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.740268   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.740285   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.740576   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.740757   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:48:24.743241   38483 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:24.743562   38483 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:24.743601   38483 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:24.743854   38483 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:24.744142   38483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:24.744174   38483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:24.758858   38483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46375
	I0531 18:48:24.759180   38483 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:24.759571   38483 main.go:141] libmachine: Using API Version  1
	I0531 18:48:24.759592   38483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:24.759869   38483 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:24.760039   38483 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:24.760216   38483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:24.760237   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:24.762646   38483 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:24.763019   38483 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:24.763045   38483 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:24.763165   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:24.763310   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:24.763482   38483 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:24.763603   38483 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:24.840312   38483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:24.856186   38483 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 7 (634.527264ms)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080809-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:48:35.437553   38634 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:48:35.437697   38634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:35.437707   38634 out.go:304] Setting ErrFile to fd 2...
	I0531 18:48:35.437711   38634 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:35.437911   38634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:48:35.438122   38634 out.go:298] Setting JSON to false
	I0531 18:48:35.438147   38634 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:35.438291   38634 notify.go:220] Checking for updates...
	I0531 18:48:35.438546   38634 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:35.438561   38634 status.go:255] checking status of ha-080809 ...
	I0531 18:48:35.439010   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.439063   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.458534   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37883
	I0531 18:48:35.459070   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.459634   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.459656   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.460115   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.460312   38634 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:48:35.462042   38634 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 18:48:35.462057   38634 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:35.462324   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.462354   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.476909   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45585
	I0531 18:48:35.477348   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.477836   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.477857   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.478212   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.478377   38634 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:48:35.481194   38634 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:35.481634   38634 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:35.481659   38634 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:35.481766   38634 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:48:35.482043   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.482077   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.496721   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34879
	I0531 18:48:35.497106   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.497561   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.497587   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.497928   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.498117   38634 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:48:35.498321   38634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:35.498349   38634 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:48:35.501197   38634 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:35.501615   38634 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:48:35.501642   38634 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:48:35.501766   38634 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:48:35.501918   38634 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:48:35.502061   38634 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:48:35.502189   38634 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:48:35.588490   38634 ssh_runner.go:195] Run: systemctl --version
	I0531 18:48:35.594592   38634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:35.610866   38634 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:35.610894   38634 api_server.go:166] Checking apiserver status ...
	I0531 18:48:35.610920   38634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:35.631367   38634 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup
	W0531 18:48:35.642147   38634 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:35.642198   38634 ssh_runner.go:195] Run: ls
	I0531 18:48:35.647205   38634 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:35.656003   38634 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:35.656024   38634 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 18:48:35.656041   38634 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:35.656055   38634 status.go:255] checking status of ha-080809-m02 ...
	I0531 18:48:35.656397   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.656437   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.672137   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0531 18:48:35.672543   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.673106   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.673133   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.673486   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.673705   38634 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:48:35.675514   38634 status.go:330] ha-080809-m02 host status = "Stopped" (err=<nil>)
	I0531 18:48:35.675530   38634 status.go:343] host is not running, skipping remaining checks
	I0531 18:48:35.675537   38634 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:35.675559   38634 status.go:255] checking status of ha-080809-m03 ...
	I0531 18:48:35.675885   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.675935   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.690899   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34771
	I0531 18:48:35.691381   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.691871   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.691892   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.692204   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.692398   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:35.694118   38634 status.go:330] ha-080809-m03 host status = "Running" (err=<nil>)
	I0531 18:48:35.694135   38634 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:35.694432   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.694475   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.708592   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37105
	I0531 18:48:35.708994   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.709441   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.709475   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.709817   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.710059   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:48:35.712678   38634 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:35.713132   38634 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:35.713159   38634 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:35.713305   38634 host.go:66] Checking if "ha-080809-m03" exists ...
	I0531 18:48:35.713586   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.713617   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.729360   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41853
	I0531 18:48:35.729794   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.730309   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.730334   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.730646   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.730856   38634 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:35.731040   38634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:35.731064   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:35.733579   38634 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:35.733946   38634 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:35.733985   38634 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:35.734156   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:35.734309   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:35.734464   38634 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:35.734587   38634 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:35.821569   38634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:35.839579   38634 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 18:48:35.839612   38634 api_server.go:166] Checking apiserver status ...
	I0531 18:48:35.839648   38634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:48:35.856920   38634 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup
	W0531 18:48:35.869179   38634 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1533/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 18:48:35.869232   38634 ssh_runner.go:195] Run: ls
	I0531 18:48:35.875053   38634 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 18:48:35.879331   38634 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 18:48:35.879357   38634 status.go:422] ha-080809-m03 apiserver status = Running (err=<nil>)
	I0531 18:48:35.879368   38634 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 18:48:35.879387   38634 status.go:255] checking status of ha-080809-m04 ...
	I0531 18:48:35.879757   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.879799   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.896603   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0531 18:48:35.897032   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.897549   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.897571   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.897888   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.898081   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:35.899694   38634 status.go:330] ha-080809-m04 host status = "Running" (err=<nil>)
	I0531 18:48:35.899711   38634 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:35.900081   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.900123   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.914410   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45869
	I0531 18:48:35.914848   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.915323   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.915343   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.915695   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.915875   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetIP
	I0531 18:48:35.918317   38634 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:35.918752   38634 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:35.918776   38634 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:35.918859   38634 host.go:66] Checking if "ha-080809-m04" exists ...
	I0531 18:48:35.919157   38634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:35.919201   38634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:35.933255   38634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I0531 18:48:35.933617   38634 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:35.934049   38634 main.go:141] libmachine: Using API Version  1
	I0531 18:48:35.934075   38634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:35.934378   38634 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:35.934552   38634 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:35.934735   38634 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 18:48:35.934764   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:35.936988   38634 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:35.937362   38634 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:35.937388   38634 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:35.937466   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:35.937629   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:35.937762   38634 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:35.937898   38634 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:36.016246   38634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:48:36.031429   38634 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr" : exit status 7
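
Two details in the stderr above help in reading this failure. The "unable to find freezer cgroup" warnings mean the egrep found no freezer entry in /proc/<pid>/cgroup (likely because the guest exposes cgroup v2, which lists no named controllers there), after which the status check falls back to probing the apiserver's /healthz endpoint on the HA virtual IP; it returns 200 for both running control planes, so they are healthy. The non-zero exit (status 7) instead reflects ha-080809-m02 still being reported Stopped after the `node start m02` step, which is what ha_test.go:432 flags. The snippet below is a minimal sketch of that /healthz probe; it skips TLS verification purely to stay self-contained, whereas the real check authenticates against the cluster CA.

// Hedged sketch of the apiserver health probe seen in the log above.
// InsecureSkipVerify is for illustration only; minikube's status code verifies
// the connection against the cluster's CA instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
	}}
	resp, err := client.Get("https://192.168.39.254:8443/healthz") // HA virtual IP from the log
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // the log above shows 200 / "ok"
}
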
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.45103381s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m03_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-080809 node start m02 -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 18:40:49
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:40:49.320932   33166 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:40:49.321060   33166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:49.321070   33166 out.go:304] Setting ErrFile to fd 2...
	I0531 18:40:49.321076   33166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:49.321266   33166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:40:49.321800   33166 out.go:298] Setting JSON to false
	I0531 18:40:49.322588   33166 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4995,"bootTime":1717175854,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:40:49.322636   33166 start.go:139] virtualization: kvm guest
	I0531 18:40:49.324727   33166 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:40:49.326253   33166 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:40:49.327340   33166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:40:49.326246   33166 notify.go:220] Checking for updates...
	I0531 18:40:49.329352   33166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:40:49.330514   33166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.331659   33166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:40:49.332801   33166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:40:49.334056   33166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:40:49.366038   33166 out.go:177] * Using the kvm2 driver based on user configuration
	I0531 18:40:49.367249   33166 start.go:297] selected driver: kvm2
	I0531 18:40:49.367264   33166 start.go:901] validating driver "kvm2" against <nil>
	I0531 18:40:49.367273   33166 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:40:49.367920   33166 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:40:49.368003   33166 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:40:49.381822   33166 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:40:49.381856   33166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 18:40:49.382087   33166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:40:49.382131   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:40:49.382142   33166 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0531 18:40:49.382146   33166 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0531 18:40:49.382194   33166 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:40:49.382282   33166 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:40:49.383818   33166 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:40:49.384909   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:40:49.384931   33166 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:40:49.384940   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:40:49.385046   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:40:49.385058   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:40:49.385316   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:40:49.385333   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json: {Name:mkb41d6e60f69eefd53f74ad11a08ceecb99e6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:40:49.385453   33166 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:40:49.385485   33166 start.go:364] duration metric: took 20.431µs to acquireMachinesLock for "ha-080809"
	I0531 18:40:49.385504   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:40:49.385560   33166 start.go:125] createHost starting for "" (driver="kvm2")
	I0531 18:40:49.386980   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:40:49.387078   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:49.387110   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:49.400415   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0531 18:40:49.400770   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:49.401282   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:40:49.401301   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:49.401661   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:49.401851   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:40:49.402017   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:40:49.402162   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:40:49.402194   33166 client.go:168] LocalClient.Create starting
	I0531 18:40:49.402246   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:40:49.402284   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:40:49.402308   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:40:49.402380   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:40:49.402406   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:40:49.402426   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:40:49.402452   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:40:49.402473   33166 main.go:141] libmachine: (ha-080809) Calling .PreCreateCheck
	I0531 18:40:49.402806   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:40:49.403148   33166 main.go:141] libmachine: Creating machine...
	I0531 18:40:49.403161   33166 main.go:141] libmachine: (ha-080809) Calling .Create
	I0531 18:40:49.403288   33166 main.go:141] libmachine: (ha-080809) Creating KVM machine...
	I0531 18:40:49.404469   33166 main.go:141] libmachine: (ha-080809) DBG | found existing default KVM network
	I0531 18:40:49.405113   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.404947   33189 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ac0}
	I0531 18:40:49.405128   33166 main.go:141] libmachine: (ha-080809) DBG | created network xml: 
	I0531 18:40:49.405138   33166 main.go:141] libmachine: (ha-080809) DBG | <network>
	I0531 18:40:49.405142   33166 main.go:141] libmachine: (ha-080809) DBG |   <name>mk-ha-080809</name>
	I0531 18:40:49.405148   33166 main.go:141] libmachine: (ha-080809) DBG |   <dns enable='no'/>
	I0531 18:40:49.405157   33166 main.go:141] libmachine: (ha-080809) DBG |   
	I0531 18:40:49.405170   33166 main.go:141] libmachine: (ha-080809) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0531 18:40:49.405180   33166 main.go:141] libmachine: (ha-080809) DBG |     <dhcp>
	I0531 18:40:49.405190   33166 main.go:141] libmachine: (ha-080809) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0531 18:40:49.405203   33166 main.go:141] libmachine: (ha-080809) DBG |     </dhcp>
	I0531 18:40:49.405211   33166 main.go:141] libmachine: (ha-080809) DBG |   </ip>
	I0531 18:40:49.405215   33166 main.go:141] libmachine: (ha-080809) DBG |   
	I0531 18:40:49.405221   33166 main.go:141] libmachine: (ha-080809) DBG | </network>
	I0531 18:40:49.405227   33166 main.go:141] libmachine: (ha-080809) DBG | 
	I0531 18:40:49.409855   33166 main.go:141] libmachine: (ha-080809) DBG | trying to create private KVM network mk-ha-080809 192.168.39.0/24...
	I0531 18:40:49.470492   33166 main.go:141] libmachine: (ha-080809) DBG | private KVM network mk-ha-080809 192.168.39.0/24 created
	I0531 18:40:49.470564   33166 main.go:141] libmachine: (ha-080809) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 ...
	I0531 18:40:49.470582   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.470452   33189 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.470603   33166 main.go:141] libmachine: (ha-080809) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:40:49.470635   33166 main.go:141] libmachine: (ha-080809) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:40:49.690168   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.690033   33189 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa...
	I0531 18:40:49.851454   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.851328   33189 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/ha-080809.rawdisk...
	I0531 18:40:49.851480   33166 main.go:141] libmachine: (ha-080809) DBG | Writing magic tar header
	I0531 18:40:49.851490   33166 main.go:141] libmachine: (ha-080809) DBG | Writing SSH key tar header
	I0531 18:40:49.851497   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:49.851445   33189 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 ...
	I0531 18:40:49.851582   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809
	I0531 18:40:49.851605   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809 (perms=drwx------)
	I0531 18:40:49.851612   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:40:49.851619   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:40:49.851629   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:40:49.851635   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:40:49.851643   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:40:49.851650   33166 main.go:141] libmachine: (ha-080809) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:40:49.851656   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:49.851663   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:40:49.851669   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:40:49.851675   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:40:49.851681   33166 main.go:141] libmachine: (ha-080809) DBG | Checking permissions on dir: /home
	I0531 18:40:49.851686   33166 main.go:141] libmachine: (ha-080809) Creating domain...
	I0531 18:40:49.851695   33166 main.go:141] libmachine: (ha-080809) DBG | Skipping /home - not owner
	I0531 18:40:49.852691   33166 main.go:141] libmachine: (ha-080809) define libvirt domain using xml: 
	I0531 18:40:49.852718   33166 main.go:141] libmachine: (ha-080809) <domain type='kvm'>
	I0531 18:40:49.852728   33166 main.go:141] libmachine: (ha-080809)   <name>ha-080809</name>
	I0531 18:40:49.852740   33166 main.go:141] libmachine: (ha-080809)   <memory unit='MiB'>2200</memory>
	I0531 18:40:49.852749   33166 main.go:141] libmachine: (ha-080809)   <vcpu>2</vcpu>
	I0531 18:40:49.852759   33166 main.go:141] libmachine: (ha-080809)   <features>
	I0531 18:40:49.852766   33166 main.go:141] libmachine: (ha-080809)     <acpi/>
	I0531 18:40:49.852773   33166 main.go:141] libmachine: (ha-080809)     <apic/>
	I0531 18:40:49.852778   33166 main.go:141] libmachine: (ha-080809)     <pae/>
	I0531 18:40:49.852788   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.852795   33166 main.go:141] libmachine: (ha-080809)   </features>
	I0531 18:40:49.852800   33166 main.go:141] libmachine: (ha-080809)   <cpu mode='host-passthrough'>
	I0531 18:40:49.852808   33166 main.go:141] libmachine: (ha-080809)   
	I0531 18:40:49.852816   33166 main.go:141] libmachine: (ha-080809)   </cpu>
	I0531 18:40:49.852856   33166 main.go:141] libmachine: (ha-080809)   <os>
	I0531 18:40:49.852879   33166 main.go:141] libmachine: (ha-080809)     <type>hvm</type>
	I0531 18:40:49.852890   33166 main.go:141] libmachine: (ha-080809)     <boot dev='cdrom'/>
	I0531 18:40:49.852900   33166 main.go:141] libmachine: (ha-080809)     <boot dev='hd'/>
	I0531 18:40:49.852911   33166 main.go:141] libmachine: (ha-080809)     <bootmenu enable='no'/>
	I0531 18:40:49.852918   33166 main.go:141] libmachine: (ha-080809)   </os>
	I0531 18:40:49.852923   33166 main.go:141] libmachine: (ha-080809)   <devices>
	I0531 18:40:49.852930   33166 main.go:141] libmachine: (ha-080809)     <disk type='file' device='cdrom'>
	I0531 18:40:49.852943   33166 main.go:141] libmachine: (ha-080809)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/boot2docker.iso'/>
	I0531 18:40:49.852986   33166 main.go:141] libmachine: (ha-080809)       <target dev='hdc' bus='scsi'/>
	I0531 18:40:49.853001   33166 main.go:141] libmachine: (ha-080809)       <readonly/>
	I0531 18:40:49.853009   33166 main.go:141] libmachine: (ha-080809)     </disk>
	I0531 18:40:49.853015   33166 main.go:141] libmachine: (ha-080809)     <disk type='file' device='disk'>
	I0531 18:40:49.853026   33166 main.go:141] libmachine: (ha-080809)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:40:49.853037   33166 main.go:141] libmachine: (ha-080809)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/ha-080809.rawdisk'/>
	I0531 18:40:49.853053   33166 main.go:141] libmachine: (ha-080809)       <target dev='hda' bus='virtio'/>
	I0531 18:40:49.853063   33166 main.go:141] libmachine: (ha-080809)     </disk>
	I0531 18:40:49.853072   33166 main.go:141] libmachine: (ha-080809)     <interface type='network'>
	I0531 18:40:49.853089   33166 main.go:141] libmachine: (ha-080809)       <source network='mk-ha-080809'/>
	I0531 18:40:49.853101   33166 main.go:141] libmachine: (ha-080809)       <model type='virtio'/>
	I0531 18:40:49.853111   33166 main.go:141] libmachine: (ha-080809)     </interface>
	I0531 18:40:49.853121   33166 main.go:141] libmachine: (ha-080809)     <interface type='network'>
	I0531 18:40:49.853133   33166 main.go:141] libmachine: (ha-080809)       <source network='default'/>
	I0531 18:40:49.853150   33166 main.go:141] libmachine: (ha-080809)       <model type='virtio'/>
	I0531 18:40:49.853170   33166 main.go:141] libmachine: (ha-080809)     </interface>
	I0531 18:40:49.853202   33166 main.go:141] libmachine: (ha-080809)     <serial type='pty'>
	I0531 18:40:49.853215   33166 main.go:141] libmachine: (ha-080809)       <target port='0'/>
	I0531 18:40:49.853221   33166 main.go:141] libmachine: (ha-080809)     </serial>
	I0531 18:40:49.853225   33166 main.go:141] libmachine: (ha-080809)     <console type='pty'>
	I0531 18:40:49.853234   33166 main.go:141] libmachine: (ha-080809)       <target type='serial' port='0'/>
	I0531 18:40:49.853241   33166 main.go:141] libmachine: (ha-080809)     </console>
	I0531 18:40:49.853249   33166 main.go:141] libmachine: (ha-080809)     <rng model='virtio'>
	I0531 18:40:49.853253   33166 main.go:141] libmachine: (ha-080809)       <backend model='random'>/dev/random</backend>
	I0531 18:40:49.853261   33166 main.go:141] libmachine: (ha-080809)     </rng>
	I0531 18:40:49.853264   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.853271   33166 main.go:141] libmachine: (ha-080809)     
	I0531 18:40:49.853275   33166 main.go:141] libmachine: (ha-080809)   </devices>
	I0531 18:40:49.853280   33166 main.go:141] libmachine: (ha-080809) </domain>
	I0531 18:40:49.853286   33166 main.go:141] libmachine: (ha-080809) 
	I0531 18:40:49.857875   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:d7:ce:f9 in network default
	I0531 18:40:49.859513   33166 main.go:141] libmachine: (ha-080809) Ensuring networks are active...
	I0531 18:40:49.859536   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:49.860192   33166 main.go:141] libmachine: (ha-080809) Ensuring network default is active
	I0531 18:40:49.860453   33166 main.go:141] libmachine: (ha-080809) Ensuring network mk-ha-080809 is active
	I0531 18:40:49.860931   33166 main.go:141] libmachine: (ha-080809) Getting domain xml...
	I0531 18:40:49.861594   33166 main.go:141] libmachine: (ha-080809) Creating domain...
	I0531 18:40:51.021353   33166 main.go:141] libmachine: (ha-080809) Waiting to get IP...
	I0531 18:40:51.022022   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.022429   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.022493   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.022423   33189 retry.go:31] will retry after 288.437015ms: waiting for machine to come up
	I0531 18:40:51.312848   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.313269   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.313303   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.313217   33189 retry.go:31] will retry after 272.356627ms: waiting for machine to come up
	I0531 18:40:51.587652   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.588060   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.588085   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.588022   33189 retry.go:31] will retry after 405.324063ms: waiting for machine to come up
	I0531 18:40:51.994525   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:51.995002   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:51.995030   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:51.994946   33189 retry.go:31] will retry after 448.884474ms: waiting for machine to come up
	I0531 18:40:52.445564   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:52.445963   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:52.445988   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:52.445932   33189 retry.go:31] will retry after 623.606026ms: waiting for machine to come up
	I0531 18:40:53.071239   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:53.071675   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:53.071696   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:53.071644   33189 retry.go:31] will retry after 831.335673ms: waiting for machine to come up
	I0531 18:40:53.904088   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:53.904491   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:53.904519   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:53.904464   33189 retry.go:31] will retry after 762.101756ms: waiting for machine to come up
	I0531 18:40:54.667921   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:54.668282   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:54.668310   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:54.668240   33189 retry.go:31] will retry after 1.06068241s: waiting for machine to come up
	I0531 18:40:55.730577   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:55.730948   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:55.730970   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:55.730927   33189 retry.go:31] will retry after 1.286483761s: waiting for machine to come up
	I0531 18:40:57.019251   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:57.019609   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:57.019635   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:57.019565   33189 retry.go:31] will retry after 1.611518766s: waiting for machine to come up
	I0531 18:40:58.633172   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:40:58.633575   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:40:58.633603   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:40:58.633520   33189 retry.go:31] will retry after 2.543412139s: waiting for machine to come up
	I0531 18:41:01.179059   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:01.179472   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:01.179520   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:01.179442   33189 retry.go:31] will retry after 3.022349633s: waiting for machine to come up
	I0531 18:41:04.202883   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:04.203239   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:04.203270   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:04.203205   33189 retry.go:31] will retry after 4.176601091s: waiting for machine to come up
	I0531 18:41:08.383778   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:08.384145   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find current IP address of domain ha-080809 in network mk-ha-080809
	I0531 18:41:08.384166   33166 main.go:141] libmachine: (ha-080809) DBG | I0531 18:41:08.384109   33189 retry.go:31] will retry after 3.418828121s: waiting for machine to come up
	I0531 18:41:11.805614   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.806024   33166 main.go:141] libmachine: (ha-080809) Found IP for machine: 192.168.39.2
	I0531 18:41:11.806044   33166 main.go:141] libmachine: (ha-080809) Reserving static IP address...
	I0531 18:41:11.806058   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has current primary IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.806394   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find host DHCP lease matching {name: "ha-080809", mac: "52:54:00:07:26:4d", ip: "192.168.39.2"} in network mk-ha-080809
	I0531 18:41:11.874853   33166 main.go:141] libmachine: (ha-080809) DBG | Getting to WaitForSSH function...
	I0531 18:41:11.874885   33166 main.go:141] libmachine: (ha-080809) Reserved static IP address: 192.168.39.2
	I0531 18:41:11.874898   33166 main.go:141] libmachine: (ha-080809) Waiting for SSH to be available...
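The retry.go:31 lines above show the driver polling the network's DHCP leases for the domain's MAC address, sleeping a little longer after each miss until an IP shows up (here after about 20 seconds). A minimal sketch of that wait-with-growing-backoff pattern; the lookupIP helper is hypothetical and simply succeeds on the third call so the sketch runs as-is:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for parsing the libvirt network's DHCP leases by MAC address.
var attempts int

func lookupIP(mac string) (string, error) {
	attempts++
	if attempts < 3 {
		return "", errNoLease
	}
	return "192.168.39.2", nil
}

func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the interval, roughly like the randomized delays above
	}
	return "", fmt.Errorf("timed out waiting for IP on %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:07:26:4d", 10*time.Second)
	fmt.Println(ip, err)
}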
	I0531 18:41:11.877644   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:11.878001   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809
	I0531 18:41:11.878025   33166 main.go:141] libmachine: (ha-080809) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:07:26:4d
	I0531 18:41:11.878180   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH client type: external
	I0531 18:41:11.878218   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa (-rw-------)
	I0531 18:41:11.878249   33166 main.go:141] libmachine: (ha-080809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:41:11.878259   33166 main.go:141] libmachine: (ha-080809) DBG | About to run SSH command:
	I0531 18:41:11.878267   33166 main.go:141] libmachine: (ha-080809) DBG | exit 0
	I0531 18:41:11.881625   33166 main.go:141] libmachine: (ha-080809) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:41:11.881645   33166 main.go:141] libmachine: (ha-080809) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:41:11.881654   33166 main.go:141] libmachine: (ha-080809) DBG | command : exit 0
	I0531 18:41:11.881661   33166 main.go:141] libmachine: (ha-080809) DBG | err     : exit status 255
	I0531 18:41:11.881672   33166 main.go:141] libmachine: (ha-080809) DBG | output  : 
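The exit status 255 above is the probe failing, not the command inside the guest: WaitForSSH shells out to /usr/bin/ssh with the options shown and asks it to run "exit 0", and ssh itself exits 255 while the daemon is unreachable (at this point the IP was still empty, hence "docker@" with no host). A stripped-down sketch of that probe loop, with placeholder host and key paths:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once `ssh ... exit 0` succeeds against the guest.
func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	// ssh exits 255 on connection failure; any non-nil error means "not ready yet".
	return cmd.Run() == nil
}

func main() {
	for !sshReady("docker", "192.168.39.2", "/path/to/id_rsa") { // placeholder key path
		time.Sleep(3 * time.Second) // the driver retries every few seconds, as above
	}
	fmt.Println("SSH is available")
}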
	I0531 18:41:14.882404   33166 main.go:141] libmachine: (ha-080809) DBG | Getting to WaitForSSH function...
	I0531 18:41:14.884819   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:14.885219   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:14.885251   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:14.885329   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH client type: external
	I0531 18:41:14.885350   33166 main.go:141] libmachine: (ha-080809) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa (-rw-------)
	I0531 18:41:14.885405   33166 main.go:141] libmachine: (ha-080809) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:41:14.885421   33166 main.go:141] libmachine: (ha-080809) DBG | About to run SSH command:
	I0531 18:41:14.885432   33166 main.go:141] libmachine: (ha-080809) DBG | exit 0
	I0531 18:41:15.008913   33166 main.go:141] libmachine: (ha-080809) DBG | SSH cmd err, output: <nil>: 
	I0531 18:41:15.009184   33166 main.go:141] libmachine: (ha-080809) KVM machine creation complete!
	I0531 18:41:15.009497   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:41:15.009988   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:15.010198   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:15.010337   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:41:15.010363   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:15.011520   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:41:15.011538   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:41:15.011554   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:41:15.011561   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.013441   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.013758   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.013785   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.013914   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.014118   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.014258   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.014397   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.014540   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.014759   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.014770   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:41:15.120475   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:41:15.120497   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:41:15.120504   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.122856   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.123267   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.123297   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.123444   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.123687   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.123842   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.123963   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.124187   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.124400   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.124416   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:41:15.229853   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:41:15.229955   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:41:15.229971   33166 main.go:141] libmachine: Provisioning with buildroot...
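"found compatible host: buildroot" comes from parsing the /etc/os-release output just above; the provisioner is picked from its ID/NAME fields. A small sketch of that detection over the same text:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseID extracts the ID= field from /etc/os-release contents.
func osReleaseID(contents string) string {
	sc := bufio.NewScanner(strings.NewReader(contents))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	if osReleaseID(sample) == "buildroot" {
		fmt.Println("found compatible host: buildroot") // choose the buildroot provisioner
	}
}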
	I0531 18:41:15.229981   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.230254   33166 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:41:15.230276   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.230455   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.232970   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.233528   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.233558   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.233678   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.233852   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.234033   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.234160   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.234350   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.234506   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.234519   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:41:15.355446   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:41:15.355476   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.358146   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.358430   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.358461   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.358736   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.358918   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.359114   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.359262   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.359428   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.359641   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.359666   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:41:15.474279   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
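The shell snippet above makes sure the freshly set hostname resolves locally: if no /etc/hosts line ends in "ha-080809", it either rewrites an existing 127.0.1.1 entry or appends one. The same ensure-a-hosts-line logic expressed in Go, operating on an in-memory copy of the file purely for illustration:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the provisioning script: keep a 127.0.1.1 mapping for the node name.
func ensureHostname(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimSpace(line), " "+name) || strings.HasSuffix(line, "\t"+name) {
			return hosts // an entry for the hostname already exists
		}
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 old-name\n", "ha-080809"))
}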
	I0531 18:41:15.474324   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:41:15.474380   33166 buildroot.go:174] setting up certificates
	I0531 18:41:15.474398   33166 provision.go:84] configureAuth start
	I0531 18:41:15.474420   33166 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:41:15.474714   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:15.477261   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.477587   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.477607   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.477767   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.479629   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.479967   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.479994   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.480087   33166 provision.go:143] copyHostCerts
	I0531 18:41:15.480115   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:41:15.480149   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:41:15.480156   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:41:15.480226   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:41:15.480318   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:41:15.480338   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:41:15.480343   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:41:15.480370   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:41:15.480430   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:41:15.480447   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:41:15.480451   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:41:15.480471   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:41:15.480527   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
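The server certificate generated above is what the docker-machine-style provisioner later copies to /etc/docker/server.pem: it is signed with the local ca.pem/ca-key.pem and carries the SANs listed (loopback, the VM IP, the hostname). A compact, self-signed stand-in using only the standard library, just to show how those SANs end up inside the certificate; key size and validity period are arbitrary here, and minikube signs with its CA rather than self-signing:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-080809"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs from the log line above: names and IPs the server cert must cover.
		DNSNames:    []string{"ha-080809", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.2")},
	}
	// Self-signed for brevity; the real cert uses the minikube CA as parent/signer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}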
	I0531 18:41:15.811140   33166 provision.go:177] copyRemoteCerts
	I0531 18:41:15.811195   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:41:15.811218   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.813750   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.814156   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.814188   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.814340   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.814546   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.814692   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.814795   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:15.900076   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:41:15.900150   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:41:15.924042   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:41:15.924110   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:41:15.946872   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:41:15.946935   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:41:15.969512   33166 provision.go:87] duration metric: took 495.099265ms to configureAuth
	I0531 18:41:15.969542   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:41:15.969745   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:15.969815   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:15.972184   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.972480   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:15.972504   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:15.972854   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:15.973061   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.973228   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:15.973362   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:15.973500   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:15.973669   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:15.973689   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:41:16.234747   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:41:16.234779   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:41:16.234789   33166 main.go:141] libmachine: (ha-080809) Calling .GetURL
	I0531 18:41:16.235971   33166 main.go:141] libmachine: (ha-080809) DBG | Using libvirt version 6000000
	I0531 18:41:16.237976   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.238288   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.238312   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.238431   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:41:16.238447   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:41:16.238465   33166 client.go:171] duration metric: took 26.836260223s to LocalClient.Create
	I0531 18:41:16.238491   33166 start.go:167] duration metric: took 26.836327274s to libmachine.API.Create "ha-080809"
	I0531 18:41:16.238503   33166 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:41:16.238515   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:41:16.238536   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.238740   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:41:16.238763   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.240487   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.240792   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.240817   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.240931   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.241109   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.241261   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.241392   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.323371   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:41:16.327800   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:41:16.327821   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:41:16.327876   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:41:16.327941   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:41:16.327951   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:41:16.328037   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:41:16.337706   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:41:16.362009   33166 start.go:296] duration metric: took 123.494571ms for postStartSetup
	I0531 18:41:16.362071   33166 main.go:141] libmachine: (ha-080809) Calling .GetConfigRaw
	I0531 18:41:16.362684   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:16.365319   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.365749   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.365785   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.365931   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:16.366117   33166 start.go:128] duration metric: took 26.980548012s to createHost
	I0531 18:41:16.366141   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.368224   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.368499   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.368530   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.368652   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.368812   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.368986   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.369162   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.369319   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:41:16.369503   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:41:16.369516   33166 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 18:41:16.473752   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717180876.456803167
	
	I0531 18:41:16.473771   33166 fix.go:216] guest clock: 1717180876.456803167
	I0531 18:41:16.473778   33166 fix.go:229] Guest: 2024-05-31 18:41:16.456803167 +0000 UTC Remote: 2024-05-31 18:41:16.366129714 +0000 UTC m=+27.078403575 (delta=90.673453ms)
	I0531 18:41:16.473795   33166 fix.go:200] guest clock delta is within tolerance: 90.673453ms
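The guest clock is read over SSH with the date +%s.%N just above and compared to the host's clock; a delta under the tolerance (here roughly 90ms) is accepted, otherwise the driver would resync the guest. A trivial sketch of that check, with the tolerance value assumed rather than taken from the source:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// withinTolerance parses the guest's `date +%s.%N` output and compares it to host time.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	delta, ok := withinTolerance("1717180876.456803167", time.Unix(1717180876, 366129714), 2*time.Second)
	fmt.Println(delta, ok) // ~90ms skew, within the assumed 2s tolerance
}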
	I0531 18:41:16.473799   33166 start.go:83] releasing machines lock for "ha-080809", held for 27.088306457s
	I0531 18:41:16.473814   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.474049   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:16.476439   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.476737   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.476763   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.476880   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477418   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477588   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:16.477664   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:41:16.477716   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.477811   33166 ssh_runner.go:195] Run: cat /version.json
	I0531 18:41:16.477831   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:16.480276   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480519   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480663   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.480691   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.480818   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.481003   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.481015   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:16.481038   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:16.481178   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.481203   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:16.481461   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.481522   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:16.481658   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:16.481836   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:16.558388   33166 ssh_runner.go:195] Run: systemctl --version
	I0531 18:41:16.581555   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:41:16.735341   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:41:16.742169   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:41:16.742241   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:41:16.758883   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:41:16.758913   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:41:16.758977   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:41:16.775246   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:41:16.789674   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:41:16.789732   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:41:16.803365   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:41:16.817683   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:41:16.929142   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:41:17.084971   33166 docker.go:233] disabling docker service ...
	I0531 18:41:17.085034   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:41:17.099168   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:41:17.112314   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:41:17.234520   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:41:17.353552   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:41:17.368462   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:41:17.386660   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:41:17.386733   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.397037   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:41:17.397097   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.407433   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.418037   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.428265   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:41:17.438851   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.448764   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.465560   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:41:17.475446   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:41:17.484144   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:41:17.484190   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:41:17.496427   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:41:17.505215   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:41:17.612656   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
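The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via a net.ipv4.ip_unprivileged_port_start=0 default sysctl, before restarting CRI-O. The same kind of edit expressed as Go regexp rewrites over the file contents, covering just the two simplest substitutions and purely for illustration (minikube really does run sed over SSH, as logged):

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the pause-image and cgroup-manager edits from the log.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(in))
}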
	I0531 18:41:17.744045   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:41:17.744113   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:41:17.749941   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:41:17.749999   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:41:17.753924   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:41:17.794692   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:41:17.794762   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:41:17.822710   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:41:17.854359   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:41:17.855654   33166 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:41:17.858089   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:17.858369   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:17.858390   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:17.858627   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:41:17.862940   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:41:17.876330   33166 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:41:17.876427   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:41:17.876467   33166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:41:17.908946   33166 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.1". assuming images are not preloaded.
	I0531 18:41:17.909038   33166 ssh_runner.go:195] Run: which lz4
	I0531 18:41:17.913015   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0531 18:41:17.913099   33166 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0531 18:41:17.917319   33166 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 18:41:17.917350   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (394537501 bytes)
	I0531 18:41:19.350031   33166 crio.go:462] duration metric: took 1.436953009s to copy over tarball
	I0531 18:41:19.350090   33166 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 18:41:21.442460   33166 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.092338648s)
	I0531 18:41:21.442485   33166 crio.go:469] duration metric: took 2.092431012s to extract the tarball
	I0531 18:41:21.442493   33166 ssh_runner.go:146] rm: /preloaded.tar.lz4
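Because no images were preloaded in the fresh VM, the roughly 395 MB preload tarball is copied in and unpacked with "tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf", which seeds CRI-O's image store so the second "crictl images" call below finds everything without pulling. A thin Go wrapper around that exact extraction command, with the tarball path assumed:

package main

import (
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into /var, as the logged command does.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
	_ = os.Remove("/preloaded.tar.lz4") // the runner deletes the tarball afterwards
}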
	I0531 18:41:21.483381   33166 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:41:21.527201   33166 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:41:21.527219   33166 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:41:21.527225   33166 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:41:21.527310   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:41:21.527370   33166 ssh_runner.go:195] Run: crio config
	I0531 18:41:21.578632   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:41:21.578651   33166 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0531 18:41:21.578659   33166 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:41:21.578680   33166 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:41:21.578811   33166 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:41:21.578832   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:41:21.578870   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:41:21.596044   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:41:21.596186   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
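The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below), so kubelet runs kube-vip as a static pod: it performs leader election across control-plane nodes, places the virtual IP 192.168.39.254 on eth0, and load-balances API traffic on port 8443. Once it is up, clients simply dial the VIP; a trivial reachability probe for it, using only the standard library:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The HA virtual IP and port served by kube-vip, taken from the manifest above.
	addr := "192.168.39.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("control-plane VIP is answering on", addr)
}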
	I0531 18:41:21.596245   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:41:21.606538   33166 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:41:21.606614   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:41:21.616171   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:41:21.632817   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:41:21.649183   33166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:41:21.665287   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0531 18:41:21.681277   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:41:21.685130   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:41:21.697282   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:41:21.813833   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:41:21.831829   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:41:21.831855   33166 certs.go:194] generating shared ca certs ...
	I0531 18:41:21.831890   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.832080   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:41:21.832137   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:41:21.832150   33166 certs.go:256] generating profile certs ...
	I0531 18:41:21.832228   33166 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:41:21.832247   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt with IP's: []
	I0531 18:41:21.960416   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt ...
	I0531 18:41:21.960441   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt: {Name:mkbb260ac793c3c3e537e9a7d865f3fd8518820e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.960604   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key ...
	I0531 18:41:21.960614   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key: {Name:mk77b95ba2c89b4a3593bb327817533591115dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:21.960684   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851
	I0531 18:41:21.960697   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.254]
	I0531 18:41:22.023685   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 ...
	I0531 18:41:22.023714   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851: {Name:mkf3e98cf3f75fbca228e9135ae86bb53400c854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.023884   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851 ...
	I0531 18:41:22.023899   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851: {Name:mka3a76e786ac918919a584640713586090b6259 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.023999   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.fe027851 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:41:22.024079   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.fe027851 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:41:22.024131   33166 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:41:22.024145   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt with IP's: []
	I0531 18:41:22.207623   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt ...
	I0531 18:41:22.207649   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt: {Name:mk9e3dcf53c1b719a3cc7796af13396f6fa7562e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.207804   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key ...
	I0531 18:41:22.207820   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key: {Name:mka4197e6978032a95184f85d2aa333ada018755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:22.207923   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:41:22.207947   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:41:22.207964   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:41:22.207982   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:41:22.207999   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:41:22.208017   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:41:22.208035   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:41:22.208052   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:41:22.208118   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:41:22.208166   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:41:22.208182   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:41:22.208211   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:41:22.208240   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:41:22.208271   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:41:22.208322   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:41:22.208359   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.208375   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.208392   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.208935   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:41:22.234417   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:41:22.257894   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:41:22.280749   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:41:22.303828   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0531 18:41:22.328316   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:41:22.352631   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:41:22.376753   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:41:22.401355   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:41:22.425203   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:41:22.448204   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:41:22.471183   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:41:22.487467   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:41:22.493236   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:41:22.504146   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.508713   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.508763   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:41:22.514472   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:41:22.525306   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:41:22.536115   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.540512   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.540545   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:41:22.546176   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:41:22.557406   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:41:22.569239   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.574119   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.574170   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:41:22.580157   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
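	Each symlink created above is named after the OpenSSL subject hash of the certificate it points to, which is how tools that use the system trust store locate a CA during chain verification. A minimal sketch, assuming a shell inside the VM, of confirming the hash-to-link mapping for the minikube CA:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem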
	I0531 18:41:22.592323   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:41:22.596787   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:41:22.596853   33166 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:41:22.596923   33166 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:41:22.597004   33166 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:41:22.645080   33166 cri.go:89] found id: ""
	I0531 18:41:22.645153   33166 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 18:41:22.658057   33166 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 18:41:22.670288   33166 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 18:41:22.682617   33166 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 18:41:22.682646   33166 kubeadm.go:156] found existing configuration files:
	
	I0531 18:41:22.682689   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 18:41:22.693401   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 18:41:22.693455   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 18:41:22.706085   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 18:41:22.715313   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 18:41:22.715356   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 18:41:22.724902   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 18:41:22.734468   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 18:41:22.734516   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 18:41:22.744080   33166 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 18:41:22.753365   33166 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 18:41:22.753424   33166 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 18:41:22.762995   33166 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0531 18:41:23.013907   33166 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 18:41:34.048356   33166 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0531 18:41:34.048424   33166 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 18:41:34.048512   33166 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 18:41:34.048626   33166 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 18:41:34.048762   33166 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0531 18:41:34.048869   33166 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 18:41:34.050299   33166 out.go:204]   - Generating certificates and keys ...
	I0531 18:41:34.050371   33166 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 18:41:34.050439   33166 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 18:41:34.050546   33166 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 18:41:34.050626   33166 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0531 18:41:34.050714   33166 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0531 18:41:34.050798   33166 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0531 18:41:34.050876   33166 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0531 18:41:34.050999   33166 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-080809 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0531 18:41:34.051072   33166 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0531 18:41:34.051197   33166 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-080809 localhost] and IPs [192.168.39.2 127.0.0.1 ::1]
	I0531 18:41:34.051295   33166 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 18:41:34.051378   33166 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 18:41:34.051440   33166 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0531 18:41:34.051514   33166 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 18:41:34.051576   33166 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 18:41:34.051675   33166 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0531 18:41:34.051764   33166 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 18:41:34.051864   33166 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 18:41:34.051952   33166 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 18:41:34.052053   33166 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 18:41:34.052144   33166 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 18:41:34.053382   33166 out.go:204]   - Booting up control plane ...
	I0531 18:41:34.053464   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 18:41:34.053539   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 18:41:34.053612   33166 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 18:41:34.053719   33166 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 18:41:34.053844   33166 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 18:41:34.053915   33166 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 18:41:34.054043   33166 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0531 18:41:34.054140   33166 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0531 18:41:34.054228   33166 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.812743ms
	I0531 18:41:34.054326   33166 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0531 18:41:34.054422   33166 kubeadm.go:309] [api-check] The API server is healthy after 5.991381448s
	I0531 18:41:34.054579   33166 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0531 18:41:34.054693   33166 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0531 18:41:34.054772   33166 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0531 18:41:34.054942   33166 kubeadm.go:309] [mark-control-plane] Marking the node ha-080809 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0531 18:41:34.055017   33166 kubeadm.go:309] [bootstrap-token] Using token: 08gm8l.f2qrzvv9b1zsgyhw
	I0531 18:41:34.056257   33166 out.go:204]   - Configuring RBAC rules ...
	I0531 18:41:34.056364   33166 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0531 18:41:34.056448   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0531 18:41:34.056584   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0531 18:41:34.056717   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0531 18:41:34.056843   33166 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0531 18:41:34.056942   33166 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0531 18:41:34.057078   33166 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0531 18:41:34.057147   33166 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0531 18:41:34.057220   33166 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0531 18:41:34.057234   33166 kubeadm.go:309] 
	I0531 18:41:34.057282   33166 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0531 18:41:34.057289   33166 kubeadm.go:309] 
	I0531 18:41:34.057354   33166 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0531 18:41:34.057360   33166 kubeadm.go:309] 
	I0531 18:41:34.057395   33166 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0531 18:41:34.057444   33166 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0531 18:41:34.057486   33166 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0531 18:41:34.057495   33166 kubeadm.go:309] 
	I0531 18:41:34.057546   33166 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0531 18:41:34.057552   33166 kubeadm.go:309] 
	I0531 18:41:34.057603   33166 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0531 18:41:34.057615   33166 kubeadm.go:309] 
	I0531 18:41:34.057660   33166 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0531 18:41:34.057727   33166 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0531 18:41:34.057788   33166 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0531 18:41:34.057796   33166 kubeadm.go:309] 
	I0531 18:41:34.057870   33166 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0531 18:41:34.057936   33166 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0531 18:41:34.057947   33166 kubeadm.go:309] 
	I0531 18:41:34.058052   33166 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	I0531 18:41:34.058141   33166 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 \
	I0531 18:41:34.058161   33166 kubeadm.go:309] 	--control-plane 
	I0531 18:41:34.058167   33166 kubeadm.go:309] 
	I0531 18:41:34.058245   33166 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0531 18:41:34.058251   33166 kubeadm.go:309] 
	I0531 18:41:34.058344   33166 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 08gm8l.f2qrzvv9b1zsgyhw \
	I0531 18:41:34.058471   33166 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 
	I0531 18:41:34.058483   33166 cni.go:84] Creating CNI manager for ""
	I0531 18:41:34.058488   33166 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0531 18:41:34.059834   33166 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0531 18:41:34.061026   33166 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0531 18:41:34.066612   33166 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0531 18:41:34.066626   33166 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0531 18:41:34.084600   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0531 18:41:34.432458   33166 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0531 18:41:34.432539   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:34.432596   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809 minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=true
	I0531 18:41:34.647235   33166 ops.go:34] apiserver oom_adj: -16
	I0531 18:41:34.648243   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:35.149072   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:35.648902   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:36.148558   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:36.648301   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:37.149179   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:37.648615   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:38.149220   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:38.648981   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:39.149074   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:39.648859   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:40.149141   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:40.648381   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:41.148640   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:41.648259   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:42.149154   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:42.648390   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:43.148383   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:43.648814   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:44.148873   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:44.648660   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:45.149112   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:45.648543   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:46.148849   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:46.648488   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:47.149158   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0531 18:41:47.243429   33166 kubeadm.go:1107] duration metric: took 12.810950463s to wait for elevateKubeSystemPrivileges
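	The repeated "kubectl get sa default" calls above are minikube polling until the "default" ServiceAccount exists, a readiness signal that the controller manager's service-account controller is running; the 12.8s duration metric is that wait. A minimal sketch of the same check, assuming kubectl access to the cluster:

	  kubectl -n default get serviceaccount default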
	W0531 18:41:47.243471   33166 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0531 18:41:47.243478   33166 kubeadm.go:393] duration metric: took 24.646633116s to StartCluster
	I0531 18:41:47.243493   33166 settings.go:142] acquiring lock: {Name:mkf1adeb57ad3d510bae126cde364cbda2de4f63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:47.243576   33166 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:41:47.244432   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/kubeconfig: {Name:mk4025a54f7d507177124d778f591e88161dcba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:41:47.244678   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0531 18:41:47.244709   33166 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:41:47.244731   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:41:47.244743   33166 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0531 18:41:47.244805   33166 addons.go:69] Setting storage-provisioner=true in profile "ha-080809"
	I0531 18:41:47.244819   33166 addons.go:69] Setting default-storageclass=true in profile "ha-080809"
	I0531 18:41:47.244834   33166 addons.go:234] Setting addon storage-provisioner=true in "ha-080809"
	I0531 18:41:47.244858   33166 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-080809"
	I0531 18:41:47.244862   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:41:47.244934   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:47.245270   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.245297   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.245347   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.245396   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.260442   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0531 18:41:47.260520   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34667
	I0531 18:41:47.260930   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.260998   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.261443   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.261458   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.261586   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.261607   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.261762   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.261876   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.262083   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.262341   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.262371   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.264404   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:41:47.264758   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0531 18:41:47.265376   33166 cert_rotation.go:137] Starting client certificate rotation controller
	I0531 18:41:47.265597   33166 addons.go:234] Setting addon default-storageclass=true in "ha-080809"
	I0531 18:41:47.265638   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:41:47.265999   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.266028   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.276877   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I0531 18:41:47.277307   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.277735   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.277760   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.278055   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.278239   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.279741   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:47.281787   33166 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 18:41:47.280229   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
	I0531 18:41:47.283158   33166 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:41:47.283178   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0531 18:41:47.283200   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:47.283555   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.284131   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.284147   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.284574   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.285144   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:47.285169   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:47.286111   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.286529   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:47.286549   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.286796   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:47.286946   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:47.287100   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:47.287222   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:47.299067   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41333
	I0531 18:41:47.299371   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:47.299879   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:47.299902   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:47.300264   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:47.300452   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:41:47.301739   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:41:47.302065   33166 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0531 18:41:47.302082   33166 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0531 18:41:47.302108   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:41:47.304732   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.305154   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:41:47.305185   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:41:47.305327   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:41:47.305517   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:41:47.305672   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:41:47.305806   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:41:47.348574   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0531 18:41:47.457360   33166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0531 18:41:47.565378   33166 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0531 18:41:47.886036   33166 start.go:946] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
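	The replace pipeline shown earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.168.39.1 from inside the cluster. A minimal sketch, assuming kubectl access, of inspecting the injected block:

	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # the Corefile should now include:
	  #   hosts {
	  #      192.168.39.1 host.minikube.internal
	  #      fallthrough
	  #   }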
	I0531 18:41:48.152663   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.152686   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.152829   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.152844   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153063   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153081   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153091   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.153098   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153168   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.153169   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153220   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153231   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.153241   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.153323   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153334   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.153435   33166 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0531 18:41:48.153442   33166 round_trippers.go:469] Request Headers:
	I0531 18:41:48.153453   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:41:48.153468   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:41:48.153637   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.153642   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.153663   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.164286   33166 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0531 18:41:48.165069   33166 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0531 18:41:48.165085   33166 round_trippers.go:469] Request Headers:
	I0531 18:41:48.165095   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:41:48.165099   33166 round_trippers.go:473]     Content-Type: application/json
	I0531 18:41:48.165103   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:41:48.168547   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:41:48.168680   33166 main.go:141] libmachine: Making call to close driver server
	I0531 18:41:48.168701   33166 main.go:141] libmachine: (ha-080809) Calling .Close
	I0531 18:41:48.169047   33166 main.go:141] libmachine: Successfully made call to close driver server
	I0531 18:41:48.169070   33166 main.go:141] libmachine: Making call to close connection to plugin binary
	I0531 18:41:48.170791   33166 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0531 18:41:48.169052   33166 main.go:141] libmachine: (ha-080809) DBG | Closing plugin on server side
	I0531 18:41:48.172057   33166 addons.go:510] duration metric: took 927.311258ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0531 18:41:48.172092   33166 start.go:245] waiting for cluster config update ...
	I0531 18:41:48.172109   33166 start.go:254] writing updated cluster config ...
	I0531 18:41:48.173752   33166 out.go:177] 
	I0531 18:41:48.175529   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:41:48.175621   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:48.177171   33166 out.go:177] * Starting "ha-080809-m02" control-plane node in "ha-080809" cluster
	I0531 18:41:48.178578   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:41:48.178603   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:41:48.178685   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:41:48.178700   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:41:48.178789   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:41:48.179425   33166 start.go:360] acquireMachinesLock for ha-080809-m02: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:41:48.179512   33166 start.go:364] duration metric: took 30.457µs to acquireMachinesLock for "ha-080809-m02"
	I0531 18:41:48.179537   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:41:48.179618   33166 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0531 18:41:48.181016   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:41:48.181087   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:41:48.181108   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:41:48.195758   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45699
	I0531 18:41:48.196241   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:41:48.196680   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:41:48.196708   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:41:48.197109   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:41:48.197274   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:41:48.197449   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:41:48.197604   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:41:48.197627   33166 client.go:168] LocalClient.Create starting
	I0531 18:41:48.197660   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:41:48.197700   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:41:48.197734   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:41:48.197805   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:41:48.197831   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:41:48.197852   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:41:48.197883   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:41:48.197897   33166 main.go:141] libmachine: (ha-080809-m02) Calling .PreCreateCheck
	I0531 18:41:48.198069   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:41:48.198434   33166 main.go:141] libmachine: Creating machine...
	I0531 18:41:48.198445   33166 main.go:141] libmachine: (ha-080809-m02) Calling .Create
	I0531 18:41:48.198569   33166 main.go:141] libmachine: (ha-080809-m02) Creating KVM machine...
	I0531 18:41:48.199911   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found existing default KVM network
	I0531 18:41:48.200038   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found existing private KVM network mk-ha-080809
	I0531 18:41:48.200157   33166 main.go:141] libmachine: (ha-080809-m02) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 ...
	I0531 18:41:48.200198   33166 main.go:141] libmachine: (ha-080809-m02) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:41:48.200249   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.200161   33574 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:41:48.200379   33166 main.go:141] libmachine: (ha-080809-m02) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:41:48.411679   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.411560   33574 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa...
	I0531 18:41:48.922794   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.922687   33574 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/ha-080809-m02.rawdisk...
	I0531 18:41:48.922829   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Writing magic tar header
	I0531 18:41:48.922842   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Writing SSH key tar header
	I0531 18:41:48.922855   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:48.922789   33574 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 ...
	I0531 18:41:48.922888   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02
	I0531 18:41:48.922912   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:41:48.922947   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:41:48.922959   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02 (perms=drwx------)
	I0531 18:41:48.922968   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:41:48.922976   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:41:48.922986   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:41:48.923002   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:41:48.923013   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:41:48.923026   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:41:48.923037   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Checking permissions on dir: /home
	I0531 18:41:48.923064   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:41:48.923077   33166 main.go:141] libmachine: (ha-080809-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:41:48.923081   33166 main.go:141] libmachine: (ha-080809-m02) Creating domain...
	I0531 18:41:48.923091   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Skipping /home - not owner
	I0531 18:41:48.923891   33166 main.go:141] libmachine: (ha-080809-m02) define libvirt domain using xml: 
	I0531 18:41:48.923913   33166 main.go:141] libmachine: (ha-080809-m02) <domain type='kvm'>
	I0531 18:41:48.923922   33166 main.go:141] libmachine: (ha-080809-m02)   <name>ha-080809-m02</name>
	I0531 18:41:48.923936   33166 main.go:141] libmachine: (ha-080809-m02)   <memory unit='MiB'>2200</memory>
	I0531 18:41:48.923948   33166 main.go:141] libmachine: (ha-080809-m02)   <vcpu>2</vcpu>
	I0531 18:41:48.923955   33166 main.go:141] libmachine: (ha-080809-m02)   <features>
	I0531 18:41:48.923964   33166 main.go:141] libmachine: (ha-080809-m02)     <acpi/>
	I0531 18:41:48.923975   33166 main.go:141] libmachine: (ha-080809-m02)     <apic/>
	I0531 18:41:48.923992   33166 main.go:141] libmachine: (ha-080809-m02)     <pae/>
	I0531 18:41:48.924000   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924010   33166 main.go:141] libmachine: (ha-080809-m02)   </features>
	I0531 18:41:48.924025   33166 main.go:141] libmachine: (ha-080809-m02)   <cpu mode='host-passthrough'>
	I0531 18:41:48.924037   33166 main.go:141] libmachine: (ha-080809-m02)   
	I0531 18:41:48.924047   33166 main.go:141] libmachine: (ha-080809-m02)   </cpu>
	I0531 18:41:48.924056   33166 main.go:141] libmachine: (ha-080809-m02)   <os>
	I0531 18:41:48.924068   33166 main.go:141] libmachine: (ha-080809-m02)     <type>hvm</type>
	I0531 18:41:48.924081   33166 main.go:141] libmachine: (ha-080809-m02)     <boot dev='cdrom'/>
	I0531 18:41:48.924096   33166 main.go:141] libmachine: (ha-080809-m02)     <boot dev='hd'/>
	I0531 18:41:48.924108   33166 main.go:141] libmachine: (ha-080809-m02)     <bootmenu enable='no'/>
	I0531 18:41:48.924119   33166 main.go:141] libmachine: (ha-080809-m02)   </os>
	I0531 18:41:48.924129   33166 main.go:141] libmachine: (ha-080809-m02)   <devices>
	I0531 18:41:48.924139   33166 main.go:141] libmachine: (ha-080809-m02)     <disk type='file' device='cdrom'>
	I0531 18:41:48.924153   33166 main.go:141] libmachine: (ha-080809-m02)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/boot2docker.iso'/>
	I0531 18:41:48.924169   33166 main.go:141] libmachine: (ha-080809-m02)       <target dev='hdc' bus='scsi'/>
	I0531 18:41:48.924194   33166 main.go:141] libmachine: (ha-080809-m02)       <readonly/>
	I0531 18:41:48.924205   33166 main.go:141] libmachine: (ha-080809-m02)     </disk>
	I0531 18:41:48.924216   33166 main.go:141] libmachine: (ha-080809-m02)     <disk type='file' device='disk'>
	I0531 18:41:48.924230   33166 main.go:141] libmachine: (ha-080809-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:41:48.924259   33166 main.go:141] libmachine: (ha-080809-m02)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/ha-080809-m02.rawdisk'/>
	I0531 18:41:48.924284   33166 main.go:141] libmachine: (ha-080809-m02)       <target dev='hda' bus='virtio'/>
	I0531 18:41:48.924298   33166 main.go:141] libmachine: (ha-080809-m02)     </disk>
	I0531 18:41:48.924312   33166 main.go:141] libmachine: (ha-080809-m02)     <interface type='network'>
	I0531 18:41:48.924336   33166 main.go:141] libmachine: (ha-080809-m02)       <source network='mk-ha-080809'/>
	I0531 18:41:48.924346   33166 main.go:141] libmachine: (ha-080809-m02)       <model type='virtio'/>
	I0531 18:41:48.924355   33166 main.go:141] libmachine: (ha-080809-m02)     </interface>
	I0531 18:41:48.924372   33166 main.go:141] libmachine: (ha-080809-m02)     <interface type='network'>
	I0531 18:41:48.924394   33166 main.go:141] libmachine: (ha-080809-m02)       <source network='default'/>
	I0531 18:41:48.924409   33166 main.go:141] libmachine: (ha-080809-m02)       <model type='virtio'/>
	I0531 18:41:48.924419   33166 main.go:141] libmachine: (ha-080809-m02)     </interface>
	I0531 18:41:48.924429   33166 main.go:141] libmachine: (ha-080809-m02)     <serial type='pty'>
	I0531 18:41:48.924438   33166 main.go:141] libmachine: (ha-080809-m02)       <target port='0'/>
	I0531 18:41:48.924446   33166 main.go:141] libmachine: (ha-080809-m02)     </serial>
	I0531 18:41:48.924452   33166 main.go:141] libmachine: (ha-080809-m02)     <console type='pty'>
	I0531 18:41:48.924459   33166 main.go:141] libmachine: (ha-080809-m02)       <target type='serial' port='0'/>
	I0531 18:41:48.924466   33166 main.go:141] libmachine: (ha-080809-m02)     </console>
	I0531 18:41:48.924472   33166 main.go:141] libmachine: (ha-080809-m02)     <rng model='virtio'>
	I0531 18:41:48.924488   33166 main.go:141] libmachine: (ha-080809-m02)       <backend model='random'>/dev/random</backend>
	I0531 18:41:48.924499   33166 main.go:141] libmachine: (ha-080809-m02)     </rng>
	I0531 18:41:48.924507   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924517   33166 main.go:141] libmachine: (ha-080809-m02)     
	I0531 18:41:48.924525   33166 main.go:141] libmachine: (ha-080809-m02)   </devices>
	I0531 18:41:48.924534   33166 main.go:141] libmachine: (ha-080809-m02) </domain>
	I0531 18:41:48.924544   33166 main.go:141] libmachine: (ha-080809-m02) 
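The domain XML logged above is handed to libvirt to define and boot the m02 VM. A minimal sketch of that step, assuming the libvirt-go bindings (github.com/libvirt/libvirt-go); defineAndStart and the XML file name are hypothetical and this is not the actual kvm2 driver code:

	package main

	import (
		"log"
		"os"

		libvirt "github.com/libvirt/libvirt-go"
	)

	func defineAndStart(domainXML string) error {
		// Connect to the same URI the config above shows (KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			return err
		}
		defer conn.Close()

		// Define the domain from the generated XML, then boot it.
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			return err
		}
		defer dom.Free()
		return dom.Create()
	}

	func main() {
		xml, err := os.ReadFile("ha-080809-m02.xml") // hypothetical file holding the XML above
		if err != nil {
			log.Fatal(err)
		}
		if err := defineAndStart(string(xml)); err != nil {
			log.Fatal(err)
		}
	}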
	I0531 18:41:48.930354   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:09:1b:f4 in network default
	I0531 18:41:48.930849   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring networks are active...
	I0531 18:41:48.930884   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:48.931590   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring network default is active
	I0531 18:41:48.931988   33166 main.go:141] libmachine: (ha-080809-m02) Ensuring network mk-ha-080809 is active
	I0531 18:41:48.932418   33166 main.go:141] libmachine: (ha-080809-m02) Getting domain xml...
	I0531 18:41:48.933230   33166 main.go:141] libmachine: (ha-080809-m02) Creating domain...
	I0531 18:41:50.098222   33166 main.go:141] libmachine: (ha-080809-m02) Waiting to get IP...
	I0531 18:41:50.098929   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.099311   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.099368   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.099298   33574 retry.go:31] will retry after 261.984814ms: waiting for machine to come up
	I0531 18:41:50.362946   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.363453   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.363487   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.363407   33574 retry.go:31] will retry after 282.991286ms: waiting for machine to come up
	I0531 18:41:50.647939   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.648489   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.648514   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.648433   33574 retry.go:31] will retry after 294.33331ms: waiting for machine to come up
	I0531 18:41:50.943773   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:50.944279   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:50.944304   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:50.944254   33574 retry.go:31] will retry after 450.31315ms: waiting for machine to come up
	I0531 18:41:51.395954   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:51.396412   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:51.396451   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:51.396368   33574 retry.go:31] will retry after 507.346302ms: waiting for machine to come up
	I0531 18:41:51.905205   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:51.905754   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:51.905774   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:51.905701   33574 retry.go:31] will retry after 715.723902ms: waiting for machine to come up
	I0531 18:41:52.623538   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:52.623889   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:52.623915   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:52.623859   33574 retry.go:31] will retry after 994.815511ms: waiting for machine to come up
	I0531 18:41:53.621021   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:53.621493   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:53.621527   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:53.621438   33574 retry.go:31] will retry after 1.011563252s: waiting for machine to come up
	I0531 18:41:54.634371   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:54.634848   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:54.634875   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:54.634824   33574 retry.go:31] will retry after 1.753029158s: waiting for machine to come up
	I0531 18:41:56.388945   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:56.389369   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:56.389395   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:56.389342   33574 retry.go:31] will retry after 1.744688574s: waiting for machine to come up
	I0531 18:41:58.135207   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:41:58.135711   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:41:58.135745   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:41:58.135653   33574 retry.go:31] will retry after 2.898419717s: waiting for machine to come up
	I0531 18:42:01.037050   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:01.037455   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:01.037472   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:01.037440   33574 retry.go:31] will retry after 2.809334691s: waiting for machine to come up
	I0531 18:42:03.848654   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:03.849111   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:03.849140   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:03.849055   33574 retry.go:31] will retry after 3.131942057s: waiting for machine to come up
	I0531 18:42:06.982387   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:06.982746   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find current IP address of domain ha-080809-m02 in network mk-ha-080809
	I0531 18:42:06.982776   33166 main.go:141] libmachine: (ha-080809-m02) DBG | I0531 18:42:06.982702   33574 retry.go:31] will retry after 5.344270754s: waiting for machine to come up
	I0531 18:42:12.331422   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:12.331856   33166 main.go:141] libmachine: (ha-080809-m02) Found IP for machine: 192.168.39.51
	I0531 18:42:12.331888   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has current primary IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
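The repeated "will retry after ..." lines above are a wait-for-IP loop polling the DHCP leases of network mk-ha-080809 with growing delays. A rough, self-contained sketch of that pattern; lookupIP is a hypothetical stand-in for the driver's lease lookup, not minikube's real API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay)
			// Grow the delay roughly like the observed 262ms ... 5.3s intervals.
			if delay < 5*time.Second {
				delay += delay / 2
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		ip, err := waitForIP(func() (string, error) {
			return "192.168.39.51", nil // stub: pretend the lease already exists
		}, time.Minute)
		fmt.Println(ip, err)
	}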
	I0531 18:42:12.331920   33166 main.go:141] libmachine: (ha-080809-m02) Reserving static IP address...
	I0531 18:42:12.332240   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find host DHCP lease matching {name: "ha-080809-m02", mac: "52:54:00:38:66:ad", ip: "192.168.39.51"} in network mk-ha-080809
	I0531 18:42:12.403133   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Getting to WaitForSSH function...
	I0531 18:42:12.403164   33166 main.go:141] libmachine: (ha-080809-m02) Reserved static IP address: 192.168.39.51
	I0531 18:42:12.403175   33166 main.go:141] libmachine: (ha-080809-m02) Waiting for SSH to be available...
	I0531 18:42:12.405482   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:12.405817   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809
	I0531 18:42:12.405842   33166 main.go:141] libmachine: (ha-080809-m02) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:38:66:ad
	I0531 18:42:12.405975   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH client type: external
	I0531 18:42:12.406000   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa (-rw-------)
	I0531 18:42:12.406029   33166 main.go:141] libmachine: (ha-080809-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:42:12.406048   33166 main.go:141] libmachine: (ha-080809-m02) DBG | About to run SSH command:
	I0531 18:42:12.406059   33166 main.go:141] libmachine: (ha-080809-m02) DBG | exit 0
	I0531 18:42:12.409696   33166 main.go:141] libmachine: (ha-080809-m02) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:42:12.409717   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:42:12.409727   33166 main.go:141] libmachine: (ha-080809-m02) DBG | command : exit 0
	I0531 18:42:12.409780   33166 main.go:141] libmachine: (ha-080809-m02) DBG | err     : exit status 255
	I0531 18:42:12.409800   33166 main.go:141] libmachine: (ha-080809-m02) DBG | output  : 
	I0531 18:42:15.411838   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Getting to WaitForSSH function...
	I0531 18:42:15.414210   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.414631   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.414663   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.414786   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH client type: external
	I0531 18:42:15.414811   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa (-rw-------)
	I0531 18:42:15.414836   33166 main.go:141] libmachine: (ha-080809-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.51 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:42:15.414852   33166 main.go:141] libmachine: (ha-080809-m02) DBG | About to run SSH command:
	I0531 18:42:15.414870   33166 main.go:141] libmachine: (ha-080809-m02) DBG | exit 0
	I0531 18:42:15.537201   33166 main.go:141] libmachine: (ha-080809-m02) DBG | SSH cmd err, output: <nil>: 
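WaitForSSH above shells out to the system ssh client with the options shown and retries "exit 0" until it succeeds (the first attempt fails with status 255 because the lease is not visible yet). A simplified sketch of that probe using only os/exec; the address and key path are copied from the log, the retry count and interval are assumptions:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func sshReady(addr, keyPath string) bool {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			"docker@"+addr, "exit 0")
		return cmd.Run() == nil
	}

	func main() {
		key := "/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa"
		for i := 0; i < 20; i++ {
			if sshReady("192.168.39.51", key) {
				fmt.Println("ssh is available")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("ssh never became available")
	}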
	I0531 18:42:15.537405   33166 main.go:141] libmachine: (ha-080809-m02) KVM machine creation complete!
	I0531 18:42:15.537722   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:42:15.538255   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:15.538434   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:15.538617   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:42:15.538630   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 18:42:15.539803   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:42:15.539822   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:42:15.539830   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:42:15.539839   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.541848   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.542129   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.542165   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.542303   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.542459   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.542590   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.542717   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.542880   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.543091   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.543103   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:42:15.640427   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:42:15.640448   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:42:15.640455   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.642974   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.643355   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.643385   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.643512   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.643699   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.643853   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.644015   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.644189   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.644339   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.644349   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:42:15.745863   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:42:15.745937   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:42:15.745951   33166 main.go:141] libmachine: Provisioning with buildroot...
	I0531 18:42:15.745963   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.746222   33166 buildroot.go:166] provisioning hostname "ha-080809-m02"
	I0531 18:42:15.746243   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.746434   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.748775   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.749092   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.749110   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.749302   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.749453   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.749615   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.749747   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.749888   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.750058   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.750069   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809-m02 && echo "ha-080809-m02" | sudo tee /etc/hostname
	I0531 18:42:15.866636   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809-m02
	
	I0531 18:42:15.866667   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.869486   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.869851   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.869879   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.870082   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:15.870264   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.870401   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:15.870499   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:15.870616   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:15.870830   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:15.870850   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:42:15.978458   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:42:15.978498   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:42:15.978519   33166 buildroot.go:174] setting up certificates
	I0531 18:42:15.978543   33166 provision.go:84] configureAuth start
	I0531 18:42:15.978558   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetMachineName
	I0531 18:42:15.978824   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:15.981244   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.981530   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.981572   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.981708   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:15.983573   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.983926   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:15.983958   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:15.984074   33166 provision.go:143] copyHostCerts
	I0531 18:42:15.984107   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:42:15.984147   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:42:15.984159   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:42:15.984237   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:42:15.984333   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:42:15.984358   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:42:15.984365   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:42:15.984405   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:42:15.984469   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:42:15.984493   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:42:15.984502   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:42:15.984533   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:42:15.984597   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809-m02 san=[127.0.0.1 192.168.39.51 ha-080809-m02 localhost minikube]
	I0531 18:42:16.325482   33166 provision.go:177] copyRemoteCerts
	I0531 18:42:16.325546   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:42:16.325575   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.327934   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.328161   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.328199   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.328332   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.328510   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.328645   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.328785   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.406957   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:42:16.407024   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0531 18:42:16.430210   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:42:16.430255   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 18:42:16.453025   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:42:16.453079   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:42:16.475768   33166 provision.go:87] duration metric: took 497.215522ms to configureAuth
	I0531 18:42:16.475793   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:42:16.475950   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:16.476058   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.478460   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.478785   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.478817   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.478962   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.479159   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.479341   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.479507   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.479682   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:16.479878   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:16.479895   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:42:16.738800   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:42:16.738832   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:42:16.738842   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetURL
	I0531 18:42:16.740084   33166 main.go:141] libmachine: (ha-080809-m02) DBG | Using libvirt version 6000000
	I0531 18:42:16.742151   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.742474   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.742502   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.742652   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:42:16.742669   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:42:16.742677   33166 client.go:171] duration metric: took 28.545039675s to LocalClient.Create
	I0531 18:42:16.742705   33166 start.go:167] duration metric: took 28.54510089s to libmachine.API.Create "ha-080809"
	I0531 18:42:16.742718   33166 start.go:293] postStartSetup for "ha-080809-m02" (driver="kvm2")
	I0531 18:42:16.742732   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:42:16.742764   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.742985   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:42:16.743009   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.745196   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.745509   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.745534   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.745637   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.745808   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.745959   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.746087   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.827714   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:42:16.832039   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:42:16.832061   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:42:16.832132   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:42:16.832218   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:42:16.832228   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:42:16.832323   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:42:16.841372   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:42:16.863828   33166 start.go:296] duration metric: took 121.097824ms for postStartSetup
	I0531 18:42:16.863867   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetConfigRaw
	I0531 18:42:16.864426   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:16.866895   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.867277   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.867306   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.867558   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:42:16.867729   33166 start.go:128] duration metric: took 28.688101798s to createHost
	I0531 18:42:16.867750   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.869644   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.869998   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.870022   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.870173   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.870343   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.870464   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.870576   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.870708   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:42:16.870856   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.51 22 <nil> <nil>}
	I0531 18:42:16.870867   33166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:42:16.965713   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717180936.947375000
	
	I0531 18:42:16.965737   33166 fix.go:216] guest clock: 1717180936.947375000
	I0531 18:42:16.965747   33166 fix.go:229] Guest: 2024-05-31 18:42:16.947375 +0000 UTC Remote: 2024-05-31 18:42:16.867740303 +0000 UTC m=+87.580014158 (delta=79.634697ms)
	I0531 18:42:16.965762   33166 fix.go:200] guest clock delta is within tolerance: 79.634697ms
	I0531 18:42:16.965767   33166 start.go:83] releasing machines lock for "ha-080809-m02", held for 28.786242379s
	I0531 18:42:16.965786   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.966024   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:16.968685   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.969049   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.969072   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.971421   33166 out.go:177] * Found network options:
	I0531 18:42:16.973067   33166 out.go:177]   - NO_PROXY=192.168.39.2
	W0531 18:42:16.974146   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:42:16.974176   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.974677   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.974907   33166 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 18:42:16.975025   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:42:16.975064   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	W0531 18:42:16.975109   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:42:16.975183   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:42:16.975203   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 18:42:16.977519   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.977840   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.977867   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.977892   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.978028   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.978286   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:16.978308   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.978311   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:16.978462   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 18:42:16.978470   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.978650   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 18:42:16.978675   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:16.978756   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 18:42:16.978852   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 18:42:17.206888   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:42:17.213726   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:42:17.213783   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:42:17.230408   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:42:17.230429   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:42:17.230480   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:42:17.248351   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:42:17.263119   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:42:17.263175   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:42:17.278088   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:42:17.291629   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:42:17.410123   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:42:17.568505   33166 docker.go:233] disabling docker service ...
	I0531 18:42:17.568569   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:42:17.583008   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:42:17.595886   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:42:17.728751   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:42:17.857263   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:42:17.872288   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:42:17.891225   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:42:17.891286   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.903161   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:42:17.903218   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.913466   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.923296   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.933772   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:42:17.944134   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.954157   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:42:17.973486   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
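The four sed edits above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place. A minimal sanity check (a sketch, assuming the edits landed exactly as logged) is to grep the drop-in for the values minikube just wrote:

    # Inspect the CRI-O drop-in edited above; expected per the sed commands:
    #   pause_image    = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup  = "pod"
    #   default_sysctls containing "net.ipv4.ip_unprivileged_port_start=0"
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/cri/../crio/crio.conf.d/02-crio.conf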
	I0531 18:42:17.983927   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:42:17.993512   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:42:17.993558   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:42:18.007549   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
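The sysctl probe above fails only because the br_netfilter module is not loaded yet; minikube then loads it and enables IPv4 forwarding. A quick manual check after those two steps (sketch):

    # Confirm bridge netfilter is loaded and both kernel settings are in place
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward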
	I0531 18:42:18.016858   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:18.136685   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:42:18.293489   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:42:18.293572   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:42:18.299706   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:42:18.299765   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:42:18.303554   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:42:18.344243   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:42:18.344336   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:42:18.372229   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:42:18.400711   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:42:18.401905   33166 out.go:177]   - env NO_PROXY=192.168.39.2
	I0531 18:42:18.402950   33166 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 18:42:18.405564   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:18.405931   33166 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 18:42:18.405958   33166 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 18:42:18.406102   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:42:18.410187   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:42:18.422505   33166 mustload.go:65] Loading cluster: ha-080809
	I0531 18:42:18.422677   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:18.422911   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:18.422951   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:18.437290   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0531 18:42:18.437653   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:18.438074   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:18.438095   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:18.438389   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:18.438565   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:42:18.440083   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:42:18.440383   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:18.440421   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:18.453809   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39325
	I0531 18:42:18.454184   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:18.454752   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:18.454771   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:18.455085   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:18.455271   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:42:18.455419   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.51
	I0531 18:42:18.455430   33166 certs.go:194] generating shared ca certs ...
	I0531 18:42:18.455447   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.455548   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:42:18.455581   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:42:18.455591   33166 certs.go:256] generating profile certs ...
	I0531 18:42:18.455654   33166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:42:18.455677   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351
	I0531 18:42:18.455690   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.254]
	I0531 18:42:18.757388   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 ...
	I0531 18:42:18.757413   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351: {Name:mk418eef2b1b5b94d8c904d182dbe3b875e7ee9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.757570   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351 ...
	I0531 18:42:18.757584   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351: {Name:mk358ccbd8c3bd1a84fbc48b0c1c3365acd5707c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:42:18.757675   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.7ff96351 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:42:18.757811   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.7ff96351 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
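The profile's apiserver certificate is re-signed here so its SANs cover the new node address and the HA virtual IP. The IP list can be confirmed directly on the freshly written cert (sketch; path taken from the log lines above):

    # List the IP SANs baked into the regenerated apiserver certificate
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # expected IPs: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.254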
	I0531 18:42:18.757934   33166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:42:18.757949   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:42:18.757962   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:42:18.757975   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:42:18.757987   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:42:18.757998   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:42:18.758009   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:42:18.758021   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:42:18.758035   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:42:18.758081   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:42:18.758106   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:42:18.758115   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:42:18.758136   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:42:18.758157   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:42:18.758178   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:42:18.758213   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:42:18.758238   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:18.758256   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:42:18.758267   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:42:18.758296   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:42:18.761276   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:18.761655   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:42:18.761681   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:18.761855   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:42:18.762061   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:42:18.762213   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:42:18.762333   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:42:18.837272   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0531 18:42:18.843046   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0531 18:42:18.854961   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0531 18:42:18.859461   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0531 18:42:18.870098   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0531 18:42:18.873977   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0531 18:42:18.884607   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0531 18:42:18.888618   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0531 18:42:18.898666   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0531 18:42:18.902718   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0531 18:42:18.914386   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0531 18:42:18.918527   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0531 18:42:18.928780   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:42:18.954314   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:42:18.981552   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:42:19.008971   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:42:19.036307   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0531 18:42:19.071878   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 18:42:19.094850   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:42:19.118913   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:42:19.142462   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:42:19.166700   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:42:19.190475   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:42:19.213581   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0531 18:42:19.230209   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0531 18:42:19.246490   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0531 18:42:19.262496   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0531 18:42:19.279299   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0531 18:42:19.295348   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0531 18:42:19.311727   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0531 18:42:19.327108   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:42:19.332562   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:42:19.342869   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.347193   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.347236   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:42:19.352873   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:42:19.363861   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:42:19.374647   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.378831   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.378866   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:42:19.384243   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:42:19.394718   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:42:19.405020   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.409207   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.409267   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:42:19.414941   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
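The link names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEM files, which is how the system trust store indexes CA certificates. One of them can be reproduced by hand (sketch):

    # The link /etc/ssl/certs/b5213941.0 comes from the cert's subject hash
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941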
	I0531 18:42:19.428266   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:42:19.432619   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:42:19.432675   33166 kubeadm.go:928] updating node {m02 192.168.39.51 8443 v1.30.1 crio true true} ...
	I0531 18:42:19.432748   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:42:19.432769   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:42:19.432806   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:42:19.450389   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:42:19.450450   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
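This static-pod manifest runs kube-vip on the control-plane node: with cp_enable and lb_enable set it advertises the VIP 192.168.39.254 over ARP and load-balances port 8443 across control planes, coordinating leadership through the plndr-cp-lock lease. Once the pod is running, two hedged spot checks (a sketch, assuming kubectl is pointed at this cluster):

    # The elected kube-vip instance holds this lease (name from vip_leasename above)
    kubectl -n kube-system get lease plndr-cp-lock
    # The VIP should answer on the apiserver port (TLS verification skipped here)
    curl -k https://192.168.39.254:8443/healthz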
	I0531 18:42:19.450507   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:42:19.460335   33166 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0531 18:42:19.460396   33166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0531 18:42:19.469942   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0531 18:42:19.469965   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:42:19.470035   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:42:19.470044   33166 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm
	I0531 18:42:19.470049   33166 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet
	I0531 18:42:19.474337   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0531 18:42:19.474365   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0531 18:42:20.790275   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:42:20.790371   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:42:20.795470   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0531 18:42:20.795503   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0531 18:42:20.936936   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:42:20.970319   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:42:20.970410   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:42:20.986184   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0531 18:42:20.986230   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
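kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a published SHA-256 checksum and then copied into /var/lib/minikube/binaries/v1.30.1 on the node. Done by hand, the equivalent looks roughly like this (sketch; URL taken from the log, kubelet shown, the other two follow the same pattern):

    # Download kubelet v1.30.1 and verify it against the published checksum
    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check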
	I0531 18:42:21.426654   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0531 18:42:21.436524   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0531 18:42:21.454081   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:42:21.471128   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
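At this point the kubelet unit, its kubeadm drop-in and the kube-vip static-pod manifest have all been written to the new node. A quick confirmation that they landed (sketch; paths from the three scp lines above):

    # Show the kubelet unit together with its drop-in, and the static-pod manifest
    sudo systemctl cat kubelet
    sudo cat /etc/kubernetes/manifests/kube-vip.yaml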
	I0531 18:42:21.488405   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:42:21.492365   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:42:21.504513   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:21.623038   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:42:21.640365   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:42:21.640724   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:42:21.640773   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:42:21.655291   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35107
	I0531 18:42:21.655806   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:42:21.656334   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:42:21.656364   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:42:21.656637   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:42:21.656842   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:42:21.657012   33166 start.go:316] joinCluster: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:42:21.657109   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 18:42:21.657130   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:42:21.660196   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:21.660701   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:42:21.660732   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:42:21.660920   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:42:21.661112   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:42:21.661264   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:42:21.661401   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:42:21.831919   33166 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:42:21.831952   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bc22m0.jeczlb21vma4k8v2 --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m02 --control-plane --apiserver-advertise-address=192.168.39.51 --apiserver-bind-port=8443"
	I0531 18:42:43.237812   33166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bc22m0.jeczlb21vma4k8v2 --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m02 --control-plane --apiserver-advertise-address=192.168.39.51 --apiserver-bind-port=8443": (21.405833616s)
	I0531 18:42:43.237858   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 18:42:43.790411   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809-m02 minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=false
	I0531 18:42:43.953718   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-080809-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0531 18:42:44.073425   33166 start.go:318] duration metric: took 22.416410562s to joinCluster
	I0531 18:42:44.073491   33166 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:42:44.075048   33166 out.go:177] * Verifying Kubernetes components...
	I0531 18:42:44.073770   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:42:44.076497   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:42:44.324056   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:42:44.349686   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:42:44.350031   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0531 18:42:44.350111   33166 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0531 18:42:44.350368   33166 node_ready.go:35] waiting up to 6m0s for node "ha-080809-m02" to be "Ready" ...
	I0531 18:42:44.350465   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:44.350475   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:44.350486   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:44.350495   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:44.362695   33166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0531 18:42:44.850794   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:44.850820   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:44.850830   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:44.850834   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:44.854745   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:45.351052   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:45.351077   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:45.351085   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:45.351088   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:45.355423   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:45.850662   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:45.850740   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:45.850763   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:45.850780   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:45.858050   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:42:46.350633   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:46.350655   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:46.350664   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:46.350668   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:46.353852   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:46.354524   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:46.851298   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:46.851317   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:46.851328   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:46.851332   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:46.854597   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:47.350916   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:47.350944   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:47.350957   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:47.350963   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:47.354470   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:47.850569   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:47.850592   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:47.850603   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:47.850608   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:47.856495   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:42:48.351061   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:48.351081   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:48.351088   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:48.351092   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:48.354373   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:48.355120   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:48.851198   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:48.851226   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:48.851237   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:48.851244   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:48.854416   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:49.351159   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:49.351184   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:49.351195   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:49.351202   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:49.354786   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:49.851306   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:49.851332   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:49.851342   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:49.851348   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:49.855199   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:50.351196   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:50.351220   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:50.351227   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:50.351231   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:50.354966   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:50.355755   33166 node_ready.go:53] node "ha-080809-m02" has status "Ready":"False"
	I0531 18:42:50.851037   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:50.851059   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:50.851065   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:50.851069   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:50.853880   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.350885   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.350913   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.350921   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.350925   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.363380   33166 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0531 18:42:51.850720   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.850745   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.850757   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.850763   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.855049   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:51.855558   33166 node_ready.go:49] node "ha-080809-m02" has status "Ready":"True"
	I0531 18:42:51.855575   33166 node_ready.go:38] duration metric: took 7.505175551s for node "ha-080809-m02" to be "Ready" ...
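The loop above is minikube polling the Node object directly over the API. The same readiness gate can be expressed with kubectl (sketch, assuming the context created for this profile keeps the default name ha-080809):

    # Equivalent manual wait for the joined control-plane node to become Ready
    kubectl --context ha-080809 wait --for=condition=Ready node/ha-080809-m02 --timeout=6m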
	I0531 18:42:51.855582   33166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:42:51.855632   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:42:51.855641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.855648   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.855654   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.859768   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:51.865277   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.865339   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr2wg
	I0531 18:42:51.865347   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.865354   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.865358   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.873796   33166 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0531 18:42:51.874368   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.874382   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.874389   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.874393   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.876561   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.877090   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.877114   33166 pod_ready.go:81] duration metric: took 11.814298ms for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.877125   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.877186   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbkpk
	I0531 18:42:51.877197   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.877207   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.877222   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.879561   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.880188   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.880204   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.880215   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.880218   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.882374   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.882875   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.882898   33166 pod_ready.go:81] duration metric: took 5.765051ms for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.882910   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.882961   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809
	I0531 18:42:51.882972   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.882980   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.882984   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.885252   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.885821   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:51.885841   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.885849   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.885856   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.888077   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.888647   33166 pod_ready.go:92] pod "etcd-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:51.888667   33166 pod_ready.go:81] duration metric: took 5.750422ms for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.888677   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:51.888730   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:51.888739   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.888747   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.888751   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.890856   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:51.891457   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:51.891471   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:51.891477   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:51.891481   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:51.893391   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:42:52.389441   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:52.389462   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.389469   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.389473   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.392807   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:52.393536   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:52.393552   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.393559   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.393564   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.395933   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:52.888976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:52.888998   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.889006   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.889009   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.891993   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:52.892662   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:52.892677   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:52.892683   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:52.892686   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:52.895477   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.389535   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:53.389568   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.389578   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.389582   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.393164   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:53.394092   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:53.394110   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.394120   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.394127   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.396914   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.889839   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:42:53.889863   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.889870   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.889874   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.892989   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:53.893783   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:53.893797   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.893805   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.893810   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.896281   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.896811   33166 pod_ready.go:92] pod "etcd-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:53.896827   33166 pod_ready.go:81] duration metric: took 2.00814425s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:53.896840   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:53.896881   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:42:53.896888   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.896894   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.896899   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.899252   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.899994   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:53.900007   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:53.900014   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:53.900017   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:53.902370   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:53.902818   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:53.902833   33166 pod_ready.go:81] duration metric: took 5.987724ms for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
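Each system-critical pod is checked the same way: one GET for the pod, one for the node it runs on. A compact manual equivalent for the whole set (sketch; label selectors taken from the list logged at 18:42:51):

    # Wait for control-plane components, kube-proxy and CoreDNS to report Ready
    kubectl -n kube-system wait --for=condition=Ready pod \
        -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m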
	I0531 18:42:53.902841   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:54.051219   33166 request.go:629] Waited for 148.326366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.051311   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.051321   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.051328   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.051334   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.054339   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:54.251307   33166 request.go:629] Waited for 196.349531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.251362   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.251381   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.251390   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.251396   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.254484   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.451091   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.451116   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.451127   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.451133   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.454734   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.650867   33166 request.go:629] Waited for 195.275171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.650930   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:54.650935   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.650942   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.650946   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.654049   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:54.903699   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:54.903721   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:54.903729   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:54.903732   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:54.906961   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.050923   33166 request.go:629] Waited for 143.259066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.050976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.050981   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.050988   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.050991   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.054289   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.403839   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:42:55.403863   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.403874   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.403879   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.407563   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.451568   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:55.451592   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.451602   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.451608   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.454745   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:55.455446   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:55.455466   33166 pod_ready.go:81] duration metric: took 1.552619438s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.455478   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.650834   33166 request.go:629] Waited for 195.287771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:42:55.650901   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:42:55.650912   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.650923   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.650930   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.653798   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:55.850878   33166 request.go:629] Waited for 196.310301ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:55.850939   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:55.850946   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:55.850956   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:55.850966   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:55.853765   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:55.854398   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:55.854421   33166 pod_ready.go:81] duration metric: took 398.933767ms for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:55.854433   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:56.051459   33166 request.go:629] Waited for 196.914204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.051522   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.051528   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.051534   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.051540   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.054787   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.250949   33166 request.go:629] Waited for 195.283991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.251010   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.251017   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.251028   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.251033   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.253708   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:56.451520   33166 request.go:629] Waited for 96.309549ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.451584   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.451590   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.451597   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.451600   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.455066   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.651079   33166 request.go:629] Waited for 195.352948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.651137   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:56.651144   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.651163   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.651175   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.654377   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:56.855568   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:56.855591   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:56.855598   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:56.855602   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:56.858651   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.050986   33166 request.go:629] Waited for 191.303858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.051050   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.051056   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.051063   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.051069   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.054339   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.355386   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:57.355405   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.355413   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.355416   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.358572   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:57.451447   33166 request.go:629] Waited for 92.186798ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.451539   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.451554   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.451561   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.451565   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.458668   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:42:57.855525   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:57.855544   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.855552   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.855556   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.861544   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:42:57.862763   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:57.862778   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:57.862786   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:57.862792   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:57.865541   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:57.866079   33166 pod_ready.go:102] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"False"
	I0531 18:42:58.354931   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:58.354954   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.354962   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.354966   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.379542   33166 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0531 18:42:58.380302   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:58.380319   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.380326   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.380329   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.383420   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:58.855138   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:58.855163   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.855171   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.855176   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.857977   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:58.858525   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:58.858541   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:58.858548   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:58.858551   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:58.861345   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.354614   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:59.354641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.354651   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.354656   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.357676   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:42:59.358392   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:59.358410   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.358419   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.358424   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.361127   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.855616   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:42:59.855641   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.855651   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.855658   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.859977   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.861278   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:42:59.861294   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.861302   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.861306   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.865613   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.866536   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:59.866554   33166 pod_ready.go:81] duration metric: took 4.012114731s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.866563   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.866613   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:42:59.866621   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.866628   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.866632   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.871593   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:42:59.872293   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:42:59.872307   33166 round_trippers.go:469] Request Headers:
	I0531 18:42:59.872314   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:42:59.872318   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:42:59.875022   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:42:59.875576   33166 pod_ready.go:92] pod "kube-proxy-dmg2w" in "kube-system" namespace has status "Ready":"True"
	I0531 18:42:59.875593   33166 pod_ready.go:81] duration metric: took 9.024494ms for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:42:59.875601   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.050979   33166 request.go:629] Waited for 175.301531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:43:00.051035   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:43:00.051040   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.051046   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.051051   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.054574   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.251779   33166 request.go:629] Waited for 196.439477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:00.251828   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:00.251834   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.251841   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.251845   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.255454   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.256066   33166 pod_ready.go:92] pod "kube-proxy-wxq4h" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:00.256086   33166 pod_ready.go:81] duration metric: took 380.479214ms for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.256094   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.451226   33166 request.go:629] Waited for 195.068355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:43:00.451301   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:43:00.451306   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.451313   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.451321   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.454409   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.651355   33166 request.go:629] Waited for 196.379921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:43:00.651429   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:43:00.651434   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.651441   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.651446   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.654722   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:00.655487   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:00.655503   33166 pod_ready.go:81] duration metric: took 399.403426ms for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.655512   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:00.851616   33166 request.go:629] Waited for 196.041128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:43:00.851664   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:43:00.851669   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:00.851677   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:00.851682   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:00.854518   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:01.051539   33166 request.go:629] Waited for 196.3823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:01.051602   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:43:01.051609   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.051620   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.051626   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.054933   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:01.055427   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:43:01.055447   33166 pod_ready.go:81] duration metric: took 399.927817ms for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:43:01.055459   33166 pod_ready.go:38] duration metric: took 9.19986509s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
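	The pod_ready.go entries above poll each control-plane pod (and its node) until the pod reports the Ready condition, backing off when the client-side throttler kicks in. A minimal client-go sketch of the same polling pattern, assuming a kubeconfig at the default location and borrowing one pod name from this run purely for illustration (this is not minikube's actual implementation):

	// Minimal sketch (not minikube's code): poll a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady returns a condition func that reports true once the pod's
	// PodReady condition is True; transient GET errors count as "not ready yet".
	func podReady(cs kubernetes.Interface, ns, name string) wait.ConditionWithContextFunc {
		return func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// The 6m0s timeout and ~500ms cadence mirror the per-pod waits visible in the log above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			podReady(cs, "kube-system", "kube-controller-manager-ha-080809-m02"))
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	The paired node GETs in the log additionally confirm the node object still exists before each readiness check; the sketch above omits that step for brevity.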
	I0531 18:43:01.055477   33166 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:43:01.055537   33166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:43:01.083139   33166 api_server.go:72] duration metric: took 17.009610466s to wait for apiserver process to appear ...
	I0531 18:43:01.083164   33166 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:43:01.083194   33166 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0531 18:43:01.091980   33166 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0531 18:43:01.092058   33166 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0531 18:43:01.092070   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.092081   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.092090   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.093732   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:43:01.093919   33166 api_server.go:141] control plane version: v1.30.1
	I0531 18:43:01.093939   33166 api_server.go:131] duration metric: took 10.766155ms to wait for apiserver health ...
	I0531 18:43:01.093950   33166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:43:01.251335   33166 request.go:629] Waited for 157.32352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.251411   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.251416   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.251424   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.251430   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.256259   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:01.260301   33166 system_pods.go:59] 17 kube-system pods found
	I0531 18:43:01.260325   33166 system_pods.go:61] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:43:01.260329   33166 system_pods.go:61] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:43:01.260333   33166 system_pods.go:61] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:43:01.260337   33166 system_pods.go:61] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:43:01.260340   33166 system_pods.go:61] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:43:01.260344   33166 system_pods.go:61] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:43:01.260349   33166 system_pods.go:61] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:43:01.260352   33166 system_pods.go:61] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:43:01.260356   33166 system_pods.go:61] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:43:01.260360   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:43:01.260363   33166 system_pods.go:61] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:43:01.260366   33166 system_pods.go:61] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:43:01.260369   33166 system_pods.go:61] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:43:01.260371   33166 system_pods.go:61] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:43:01.260374   33166 system_pods.go:61] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:43:01.260377   33166 system_pods.go:61] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:43:01.260380   33166 system_pods.go:61] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:43:01.260386   33166 system_pods.go:74] duration metric: took 166.430841ms to wait for pod list to return data ...
	I0531 18:43:01.260396   33166 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:43:01.450731   33166 request.go:629] Waited for 190.27249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:43:01.450817   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:43:01.450829   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.450839   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.450846   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.453536   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:01.453728   33166 default_sa.go:45] found service account: "default"
	I0531 18:43:01.453746   33166 default_sa.go:55] duration metric: took 193.343475ms for default service account to be created ...
	I0531 18:43:01.453755   33166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:43:01.651175   33166 request.go:629] Waited for 197.353935ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.651245   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:43:01.651250   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.651257   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.651261   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.658681   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:43:01.662823   33166 system_pods.go:86] 17 kube-system pods found
	I0531 18:43:01.662846   33166 system_pods.go:89] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:43:01.662851   33166 system_pods.go:89] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:43:01.662855   33166 system_pods.go:89] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:43:01.662859   33166 system_pods.go:89] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:43:01.662864   33166 system_pods.go:89] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:43:01.662868   33166 system_pods.go:89] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:43:01.662872   33166 system_pods.go:89] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:43:01.662876   33166 system_pods.go:89] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:43:01.662881   33166 system_pods.go:89] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:43:01.662887   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:43:01.662890   33166 system_pods.go:89] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:43:01.662894   33166 system_pods.go:89] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:43:01.662901   33166 system_pods.go:89] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:43:01.662904   33166 system_pods.go:89] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:43:01.662908   33166 system_pods.go:89] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:43:01.662912   33166 system_pods.go:89] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:43:01.662915   33166 system_pods.go:89] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:43:01.662921   33166 system_pods.go:126] duration metric: took 209.162213ms to wait for k8s-apps to be running ...
	I0531 18:43:01.662930   33166 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:43:01.662970   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:43:01.678467   33166 system_svc.go:56] duration metric: took 15.528151ms WaitForService to wait for kubelet
	I0531 18:43:01.678494   33166 kubeadm.go:576] duration metric: took 17.604968594s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:43:01.678516   33166 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:43:01.850855   33166 request.go:629] Waited for 172.276315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0531 18:43:01.850956   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0531 18:43:01.850972   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:01.850982   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:01.850990   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:01.854853   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:01.855615   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:43:01.855641   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:43:01.855655   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:43:01.855660   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:43:01.855666   33166 node_conditions.go:105] duration metric: took 177.144669ms to run NodePressure ...
	I0531 18:43:01.855678   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:43:01.855702   33166 start.go:254] writing updated cluster config ...
	I0531 18:43:01.857736   33166 out.go:177] 
	I0531 18:43:01.859220   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:01.859366   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:01.860838   33166 out.go:177] * Starting "ha-080809-m03" control-plane node in "ha-080809" cluster
	I0531 18:43:01.862061   33166 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:43:01.862080   33166 cache.go:56] Caching tarball of preloaded images
	I0531 18:43:01.862156   33166 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:43:01.862166   33166 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:43:01.862240   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:01.862393   33166 start.go:360] acquireMachinesLock for ha-080809-m03: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:43:01.862439   33166 start.go:364] duration metric: took 23.982µs to acquireMachinesLock for "ha-080809-m03"
	I0531 18:43:01.862463   33166 start.go:93] Provisioning new machine with config: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:01.862552   33166 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0531 18:43:01.863925   33166 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 18:43:01.863991   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:01.864021   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:01.878423   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36751
	I0531 18:43:01.878800   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:01.879264   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:01.879287   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:01.879564   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:01.879751   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:01.879886   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:01.880025   33166 start.go:159] libmachine.API.Create for "ha-080809" (driver="kvm2")
	I0531 18:43:01.880055   33166 client.go:168] LocalClient.Create starting
	I0531 18:43:01.880086   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 18:43:01.880115   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:43:01.880130   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:43:01.880176   33166 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 18:43:01.880193   33166 main.go:141] libmachine: Decoding PEM data...
	I0531 18:43:01.880204   33166 main.go:141] libmachine: Parsing certificate...
	I0531 18:43:01.880220   33166 main.go:141] libmachine: Running pre-create checks...
	I0531 18:43:01.880227   33166 main.go:141] libmachine: (ha-080809-m03) Calling .PreCreateCheck
	I0531 18:43:01.880374   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:01.880666   33166 main.go:141] libmachine: Creating machine...
	I0531 18:43:01.880678   33166 main.go:141] libmachine: (ha-080809-m03) Calling .Create
	I0531 18:43:01.880772   33166 main.go:141] libmachine: (ha-080809-m03) Creating KVM machine...
	I0531 18:43:01.881991   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found existing default KVM network
	I0531 18:43:01.882124   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found existing private KVM network mk-ha-080809
	I0531 18:43:01.882234   33166 main.go:141] libmachine: (ha-080809-m03) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 ...
	I0531 18:43:01.882261   33166 main.go:141] libmachine: (ha-080809-m03) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 18:43:01.882329   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:01.882228   33952 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:43:01.882387   33166 main.go:141] libmachine: (ha-080809-m03) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 18:43:02.090721   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.090579   33952 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa...
	I0531 18:43:02.230526   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.230407   33952 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/ha-080809-m03.rawdisk...
	I0531 18:43:02.230557   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Writing magic tar header
	I0531 18:43:02.230571   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Writing SSH key tar header
	I0531 18:43:02.230587   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:02.230512   33952 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 ...
	I0531 18:43:02.230604   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03
	I0531 18:43:02.230643   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03 (perms=drwx------)
	I0531 18:43:02.230655   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 18:43:02.230664   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 18:43:02.230672   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 18:43:02.230687   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 18:43:02.230700   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:43:02.230711   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 18:43:02.230724   33166 main.go:141] libmachine: (ha-080809-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 18:43:02.230732   33166 main.go:141] libmachine: (ha-080809-m03) Creating domain...
	I0531 18:43:02.230740   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 18:43:02.230748   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 18:43:02.230763   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home/jenkins
	I0531 18:43:02.230776   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Checking permissions on dir: /home
	I0531 18:43:02.230791   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Skipping /home - not owner
	I0531 18:43:02.231711   33166 main.go:141] libmachine: (ha-080809-m03) define libvirt domain using xml: 
	I0531 18:43:02.231730   33166 main.go:141] libmachine: (ha-080809-m03) <domain type='kvm'>
	I0531 18:43:02.231740   33166 main.go:141] libmachine: (ha-080809-m03)   <name>ha-080809-m03</name>
	I0531 18:43:02.231749   33166 main.go:141] libmachine: (ha-080809-m03)   <memory unit='MiB'>2200</memory>
	I0531 18:43:02.231757   33166 main.go:141] libmachine: (ha-080809-m03)   <vcpu>2</vcpu>
	I0531 18:43:02.231771   33166 main.go:141] libmachine: (ha-080809-m03)   <features>
	I0531 18:43:02.231783   33166 main.go:141] libmachine: (ha-080809-m03)     <acpi/>
	I0531 18:43:02.231797   33166 main.go:141] libmachine: (ha-080809-m03)     <apic/>
	I0531 18:43:02.231809   33166 main.go:141] libmachine: (ha-080809-m03)     <pae/>
	I0531 18:43:02.231818   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.231826   33166 main.go:141] libmachine: (ha-080809-m03)   </features>
	I0531 18:43:02.231837   33166 main.go:141] libmachine: (ha-080809-m03)   <cpu mode='host-passthrough'>
	I0531 18:43:02.231886   33166 main.go:141] libmachine: (ha-080809-m03)   
	I0531 18:43:02.231909   33166 main.go:141] libmachine: (ha-080809-m03)   </cpu>
	I0531 18:43:02.231927   33166 main.go:141] libmachine: (ha-080809-m03)   <os>
	I0531 18:43:02.231938   33166 main.go:141] libmachine: (ha-080809-m03)     <type>hvm</type>
	I0531 18:43:02.231952   33166 main.go:141] libmachine: (ha-080809-m03)     <boot dev='cdrom'/>
	I0531 18:43:02.231961   33166 main.go:141] libmachine: (ha-080809-m03)     <boot dev='hd'/>
	I0531 18:43:02.231975   33166 main.go:141] libmachine: (ha-080809-m03)     <bootmenu enable='no'/>
	I0531 18:43:02.231985   33166 main.go:141] libmachine: (ha-080809-m03)   </os>
	I0531 18:43:02.231996   33166 main.go:141] libmachine: (ha-080809-m03)   <devices>
	I0531 18:43:02.232012   33166 main.go:141] libmachine: (ha-080809-m03)     <disk type='file' device='cdrom'>
	I0531 18:43:02.232049   33166 main.go:141] libmachine: (ha-080809-m03)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/boot2docker.iso'/>
	I0531 18:43:02.232076   33166 main.go:141] libmachine: (ha-080809-m03)       <target dev='hdc' bus='scsi'/>
	I0531 18:43:02.232090   33166 main.go:141] libmachine: (ha-080809-m03)       <readonly/>
	I0531 18:43:02.232100   33166 main.go:141] libmachine: (ha-080809-m03)     </disk>
	I0531 18:43:02.232113   33166 main.go:141] libmachine: (ha-080809-m03)     <disk type='file' device='disk'>
	I0531 18:43:02.232125   33166 main.go:141] libmachine: (ha-080809-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 18:43:02.232140   33166 main.go:141] libmachine: (ha-080809-m03)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/ha-080809-m03.rawdisk'/>
	I0531 18:43:02.232156   33166 main.go:141] libmachine: (ha-080809-m03)       <target dev='hda' bus='virtio'/>
	I0531 18:43:02.232167   33166 main.go:141] libmachine: (ha-080809-m03)     </disk>
	I0531 18:43:02.232178   33166 main.go:141] libmachine: (ha-080809-m03)     <interface type='network'>
	I0531 18:43:02.232192   33166 main.go:141] libmachine: (ha-080809-m03)       <source network='mk-ha-080809'/>
	I0531 18:43:02.232201   33166 main.go:141] libmachine: (ha-080809-m03)       <model type='virtio'/>
	I0531 18:43:02.232213   33166 main.go:141] libmachine: (ha-080809-m03)     </interface>
	I0531 18:43:02.232228   33166 main.go:141] libmachine: (ha-080809-m03)     <interface type='network'>
	I0531 18:43:02.232241   33166 main.go:141] libmachine: (ha-080809-m03)       <source network='default'/>
	I0531 18:43:02.232259   33166 main.go:141] libmachine: (ha-080809-m03)       <model type='virtio'/>
	I0531 18:43:02.232271   33166 main.go:141] libmachine: (ha-080809-m03)     </interface>
	I0531 18:43:02.232278   33166 main.go:141] libmachine: (ha-080809-m03)     <serial type='pty'>
	I0531 18:43:02.232290   33166 main.go:141] libmachine: (ha-080809-m03)       <target port='0'/>
	I0531 18:43:02.232299   33166 main.go:141] libmachine: (ha-080809-m03)     </serial>
	I0531 18:43:02.232307   33166 main.go:141] libmachine: (ha-080809-m03)     <console type='pty'>
	I0531 18:43:02.232317   33166 main.go:141] libmachine: (ha-080809-m03)       <target type='serial' port='0'/>
	I0531 18:43:02.232336   33166 main.go:141] libmachine: (ha-080809-m03)     </console>
	I0531 18:43:02.232353   33166 main.go:141] libmachine: (ha-080809-m03)     <rng model='virtio'>
	I0531 18:43:02.232369   33166 main.go:141] libmachine: (ha-080809-m03)       <backend model='random'>/dev/random</backend>
	I0531 18:43:02.232379   33166 main.go:141] libmachine: (ha-080809-m03)     </rng>
	I0531 18:43:02.232393   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.232403   33166 main.go:141] libmachine: (ha-080809-m03)     
	I0531 18:43:02.232414   33166 main.go:141] libmachine: (ha-080809-m03)   </devices>
	I0531 18:43:02.232425   33166 main.go:141] libmachine: (ha-080809-m03) </domain>
	I0531 18:43:02.232435   33166 main.go:141] libmachine: (ha-080809-m03) 
	I0531 18:43:02.238951   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:a0:f1:97 in network default
	I0531 18:43:02.239544   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring networks are active...
	I0531 18:43:02.239563   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:02.240268   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring network default is active
	I0531 18:43:02.240630   33166 main.go:141] libmachine: (ha-080809-m03) Ensuring network mk-ha-080809 is active
	I0531 18:43:02.240936   33166 main.go:141] libmachine: (ha-080809-m03) Getting domain xml...
	I0531 18:43:02.241629   33166 main.go:141] libmachine: (ha-080809-m03) Creating domain...
	I0531 18:43:03.460560   33166 main.go:141] libmachine: (ha-080809-m03) Waiting to get IP...
	I0531 18:43:03.461349   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.461672   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.461738   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.461679   33952 retry.go:31] will retry after 217.587912ms: waiting for machine to come up
	I0531 18:43:03.681278   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.681807   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.681846   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.681764   33952 retry.go:31] will retry after 249.833153ms: waiting for machine to come up
	I0531 18:43:03.933158   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:03.933630   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:03.933651   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:03.933583   33952 retry.go:31] will retry after 474.287257ms: waiting for machine to come up
	I0531 18:43:04.409317   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:04.409819   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:04.409850   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:04.409767   33952 retry.go:31] will retry after 443.148246ms: waiting for machine to come up
	I0531 18:43:04.854034   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:04.854523   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:04.854557   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:04.854474   33952 retry.go:31] will retry after 633.441738ms: waiting for machine to come up
	I0531 18:43:05.489926   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:05.490339   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:05.490364   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:05.490296   33952 retry.go:31] will retry after 851.493119ms: waiting for machine to come up
	I0531 18:43:06.343155   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:06.343539   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:06.343562   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:06.343495   33952 retry.go:31] will retry after 1.128436452s: waiting for machine to come up
	I0531 18:43:07.473066   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:07.473666   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:07.473695   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:07.473619   33952 retry.go:31] will retry after 1.149170287s: waiting for machine to come up
	I0531 18:43:08.624818   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:08.625318   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:08.625345   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:08.625276   33952 retry.go:31] will retry after 1.363964159s: waiting for machine to come up
	I0531 18:43:09.991757   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:09.992190   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:09.992236   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:09.992154   33952 retry.go:31] will retry after 1.823432389s: waiting for machine to come up
	I0531 18:43:11.817974   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:11.818405   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:11.818460   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:11.818380   33952 retry.go:31] will retry after 2.649232561s: waiting for machine to come up
	I0531 18:43:14.469976   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:14.470413   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:14.470440   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:14.470365   33952 retry.go:31] will retry after 2.29103927s: waiting for machine to come up
	I0531 18:43:16.764888   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:16.765395   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:16.765426   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:16.765340   33952 retry.go:31] will retry after 2.933558944s: waiting for machine to come up
	I0531 18:43:19.701947   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:19.702299   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find current IP address of domain ha-080809-m03 in network mk-ha-080809
	I0531 18:43:19.702331   33166 main.go:141] libmachine: (ha-080809-m03) DBG | I0531 18:43:19.702262   33952 retry.go:31] will retry after 3.848286184s: waiting for machine to come up
	I0531 18:43:23.551607   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.552073   33166 main.go:141] libmachine: (ha-080809-m03) Found IP for machine: 192.168.39.132
	I0531 18:43:23.552098   33166 main.go:141] libmachine: (ha-080809-m03) Reserving static IP address...
	I0531 18:43:23.552113   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has current primary IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.552455   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find host DHCP lease matching {name: "ha-080809-m03", mac: "52:54:00:9d:3b:ba", ip: "192.168.39.132"} in network mk-ha-080809
	I0531 18:43:23.621780   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Getting to WaitForSSH function...
	I0531 18:43:23.621811   33166 main.go:141] libmachine: (ha-080809-m03) Reserved static IP address: 192.168.39.132
	I0531 18:43:23.621838   33166 main.go:141] libmachine: (ha-080809-m03) Waiting for SSH to be available...
	I0531 18:43:23.624415   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:23.624835   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809
	I0531 18:43:23.624863   33166 main.go:141] libmachine: (ha-080809-m03) DBG | unable to find defined IP address of network mk-ha-080809 interface with MAC address 52:54:00:9d:3b:ba
	I0531 18:43:23.625089   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH client type: external
	I0531 18:43:23.625118   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa (-rw-------)
	I0531 18:43:23.625145   33166 main.go:141] libmachine: (ha-080809-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:43:23.625159   33166 main.go:141] libmachine: (ha-080809-m03) DBG | About to run SSH command:
	I0531 18:43:23.625172   33166 main.go:141] libmachine: (ha-080809-m03) DBG | exit 0
	I0531 18:43:23.628673   33166 main.go:141] libmachine: (ha-080809-m03) DBG | SSH cmd err, output: exit status 255: 
	I0531 18:43:23.628689   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0531 18:43:23.628696   33166 main.go:141] libmachine: (ha-080809-m03) DBG | command : exit 0
	I0531 18:43:23.628700   33166 main.go:141] libmachine: (ha-080809-m03) DBG | err     : exit status 255
	I0531 18:43:23.628708   33166 main.go:141] libmachine: (ha-080809-m03) DBG | output  : 
	I0531 18:43:26.629183   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Getting to WaitForSSH function...
	I0531 18:43:26.631197   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.631568   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.631589   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.631731   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH client type: external
	I0531 18:43:26.631753   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa (-rw-------)
	I0531 18:43:26.631782   33166 main.go:141] libmachine: (ha-080809-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 18:43:26.631796   33166 main.go:141] libmachine: (ha-080809-m03) DBG | About to run SSH command:
	I0531 18:43:26.631809   33166 main.go:141] libmachine: (ha-080809-m03) DBG | exit 0
	I0531 18:43:26.760999   33166 main.go:141] libmachine: (ha-080809-m03) DBG | SSH cmd err, output: <nil>: 
	I0531 18:43:26.761310   33166 main.go:141] libmachine: (ha-080809-m03) KVM machine creation complete!
	I0531 18:43:26.761644   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:26.762182   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:26.762499   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:26.762676   33166 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 18:43:26.762688   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:43:26.763935   33166 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 18:43:26.763950   33166 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 18:43:26.763956   33166 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 18:43:26.763965   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:26.766294   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.766690   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.766720   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.766886   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:26.767038   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.767184   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.767314   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:26.767499   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:26.767684   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:26.767696   33166 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 18:43:26.880273   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:43:26.880302   33166 main.go:141] libmachine: Detecting the provisioner...
	I0531 18:43:26.880312   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:26.882898   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.883199   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:26.883223   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:26.883439   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:26.883634   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.883740   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:26.883880   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:26.884072   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:26.884259   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:26.884273   33166 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 18:43:27.001677   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 18:43:27.001734   33166 main.go:141] libmachine: found compatible host: buildroot
	I0531 18:43:27.001740   33166 main.go:141] libmachine: Provisioning with buildroot...
	I0531 18:43:27.001748   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.001960   33166 buildroot.go:166] provisioning hostname "ha-080809-m03"
	I0531 18:43:27.001993   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.002200   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.004407   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.004737   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.004767   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.004834   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.005035   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.005200   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.005347   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.005499   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.005663   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.005677   33166 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809-m03 && echo "ha-080809-m03" | sudo tee /etc/hostname
	I0531 18:43:27.136866   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809-m03
	
	I0531 18:43:27.136888   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.139661   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.140075   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.140104   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.140228   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.140384   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.140541   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.140647   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.140826   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.141041   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.141059   33166 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:43:27.270861   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:43:27.270890   33166 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:43:27.270912   33166 buildroot.go:174] setting up certificates
	I0531 18:43:27.270925   33166 provision.go:84] configureAuth start
	I0531 18:43:27.270936   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetMachineName
	I0531 18:43:27.271182   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:27.273872   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.274225   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.274255   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.274398   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.276525   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.276845   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.276865   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.277027   33166 provision.go:143] copyHostCerts
	I0531 18:43:27.277054   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:43:27.277084   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:43:27.277091   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:43:27.277152   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:43:27.277219   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:43:27.277238   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:43:27.277244   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:43:27.277267   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:43:27.277321   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:43:27.277338   33166 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:43:27.277344   33166 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:43:27.277365   33166 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:43:27.277430   33166 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809-m03 san=[127.0.0.1 192.168.39.132 ha-080809-m03 localhost minikube]
	I0531 18:43:27.803552   33166 provision.go:177] copyRemoteCerts
	I0531 18:43:27.803612   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:43:27.803645   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.806606   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.807056   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.807095   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.807234   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.807456   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.807631   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.807759   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:27.899155   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:43:27.899249   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:43:27.923813   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:43:27.923885   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0531 18:43:27.949341   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:43:27.949412   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:43:27.973160   33166 provision.go:87] duration metric: took 702.223721ms to configureAuth
	I0531 18:43:27.973189   33166 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:43:27.973463   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:27.973543   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:27.975968   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.976363   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:27.976385   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:27.976575   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:27.976766   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.976933   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:27.977106   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:27.977267   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:27.977415   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:27.977430   33166 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:43:28.270929   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:43:28.270953   33166 main.go:141] libmachine: Checking connection to Docker...
	I0531 18:43:28.270968   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetURL
	I0531 18:43:28.272112   33166 main.go:141] libmachine: (ha-080809-m03) DBG | Using libvirt version 6000000
	I0531 18:43:28.274617   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.274982   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.275011   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.275176   33166 main.go:141] libmachine: Docker is up and running!
	I0531 18:43:28.275194   33166 main.go:141] libmachine: Reticulating splines...
	I0531 18:43:28.275200   33166 client.go:171] duration metric: took 26.395137472s to LocalClient.Create
	I0531 18:43:28.275226   33166 start.go:167] duration metric: took 26.395199871s to libmachine.API.Create "ha-080809"
	I0531 18:43:28.275239   33166 start.go:293] postStartSetup for "ha-080809-m03" (driver="kvm2")
	I0531 18:43:28.275258   33166 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:43:28.275281   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.275551   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:43:28.275579   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.277959   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.278331   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.278362   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.278483   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.278653   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.278813   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.278957   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.367657   33166 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:43:28.371933   33166 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:43:28.371959   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:43:28.372052   33166 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:43:28.372137   33166 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:43:28.372147   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:43:28.372224   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:43:28.381683   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:43:28.405559   33166 start.go:296] duration metric: took 130.30164ms for postStartSetup
	I0531 18:43:28.405609   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetConfigRaw
	I0531 18:43:28.406191   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:28.408785   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.409200   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.409226   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.409486   33166 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:43:28.409666   33166 start.go:128] duration metric: took 26.54710274s to createHost
	I0531 18:43:28.409697   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.411904   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.412287   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.412307   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.412441   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.412631   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.412785   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.412893   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.413051   33166 main.go:141] libmachine: Using SSH client type: native
	I0531 18:43:28.413211   33166 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.132 22 <nil> <nil>}
	I0531 18:43:28.413224   33166 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:43:28.529683   33166 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181008.508532399
	
	I0531 18:43:28.529704   33166 fix.go:216] guest clock: 1717181008.508532399
	I0531 18:43:28.529712   33166 fix.go:229] Guest: 2024-05-31 18:43:28.508532399 +0000 UTC Remote: 2024-05-31 18:43:28.409677241 +0000 UTC m=+159.121951096 (delta=98.855158ms)
	I0531 18:43:28.529726   33166 fix.go:200] guest clock delta is within tolerance: 98.855158ms
	I0531 18:43:28.529731   33166 start.go:83] releasing machines lock for "ha-080809-m03", held for 26.667282362s
	I0531 18:43:28.529748   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.529953   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:28.532379   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.532669   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.532697   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.535221   33166 out.go:177] * Found network options:
	I0531 18:43:28.536708   33166 out.go:177]   - NO_PROXY=192.168.39.2,192.168.39.51
	W0531 18:43:28.537888   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 18:43:28.537906   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:43:28.537918   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538415   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538566   33166 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:43:28.538660   33166 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:43:28.538697   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	W0531 18:43:28.538702   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	W0531 18:43:28.538716   33166 proxy.go:119] fail to check proxy env: Error ip not in block
	I0531 18:43:28.538763   33166 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:43:28.538779   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:43:28.541399   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541663   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541752   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.541774   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.541926   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.542015   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:28.542051   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:28.542082   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.542179   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:43:28.542247   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.542333   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:43:28.542407   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.542441   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:43:28.542569   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:43:28.788519   33166 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:43:28.795611   33166 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:43:28.795679   33166 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:43:28.812413   33166 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 18:43:28.812433   33166 start.go:494] detecting cgroup driver to use...
	I0531 18:43:28.812490   33166 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:43:28.831628   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:43:28.847597   33166 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:43:28.847641   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:43:28.862540   33166 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:43:28.877654   33166 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:43:28.989967   33166 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:43:29.154561   33166 docker.go:233] disabling docker service ...
	I0531 18:43:29.154645   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:43:29.169912   33166 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:43:29.183740   33166 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:43:29.306538   33166 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:43:29.427978   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:43:29.451329   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:43:29.470342   33166 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:43:29.470408   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.481119   33166 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:43:29.481183   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.491741   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.501860   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.512075   33166 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:43:29.523302   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.534091   33166 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.552066   33166 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:43:29.563282   33166 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:43:29.573627   33166 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 18:43:29.573683   33166 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 18:43:29.587830   33166 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:43:29.597625   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:29.725913   33166 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:43:29.878013   33166 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:43:29.878089   33166 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:43:29.883390   33166 start.go:562] Will wait 60s for crictl version
	I0531 18:43:29.883444   33166 ssh_runner.go:195] Run: which crictl
	I0531 18:43:29.887451   33166 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:43:29.924056   33166 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:43:29.924159   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:43:29.952015   33166 ssh_runner.go:195] Run: crio --version
	I0531 18:43:29.988257   33166 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:43:29.989817   33166 out.go:177]   - env NO_PROXY=192.168.39.2
	I0531 18:43:29.991145   33166 out.go:177]   - env NO_PROXY=192.168.39.2,192.168.39.51
	I0531 18:43:29.992302   33166 main.go:141] libmachine: (ha-080809-m03) Calling .GetIP
	I0531 18:43:29.994728   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:29.995094   33166 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:43:29.995117   33166 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:43:29.995331   33166 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:43:29.999628   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:43:30.012246   33166 mustload.go:65] Loading cluster: ha-080809
	I0531 18:43:30.012508   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:30.012762   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:30.012798   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:30.029304   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0531 18:43:30.029680   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:30.030171   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:30.030195   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:30.030496   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:30.030662   33166 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:43:30.032121   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:43:30.032427   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:30.032461   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:30.046245   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43081
	I0531 18:43:30.046582   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:30.047016   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:30.047037   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:30.047364   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:30.047540   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:43:30.047695   33166 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.132
	I0531 18:43:30.047705   33166 certs.go:194] generating shared ca certs ...
	I0531 18:43:30.047717   33166 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.047815   33166 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:43:30.047850   33166 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:43:30.047883   33166 certs.go:256] generating profile certs ...
	I0531 18:43:30.047957   33166 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:43:30.047980   33166 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834
	I0531 18:43:30.047993   33166 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:43:30.275589   33166 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 ...
	I0531 18:43:30.275618   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834: {Name:mk51db84405b0da738612b01e79c91f0dc2fa702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.275827   33166 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834 ...
	I0531 18:43:30.275844   33166 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834: {Name:mkfa5bc2228ebaecff4f3c4178898d0f2265a665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:43:30.275930   33166 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.325ce834 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:43:30.276059   33166 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.325ce834 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:43:30.276175   33166 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:43:30.276189   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:43:30.276200   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:43:30.276211   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:43:30.276223   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:43:30.276237   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:43:30.276251   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:43:30.276263   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:43:30.276275   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:43:30.276322   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:43:30.276347   33166 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:43:30.276357   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:43:30.276378   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:43:30.276399   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:43:30.276421   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:43:30.276460   33166 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:43:30.276484   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.276498   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.276510   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.276539   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:43:30.279570   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:30.279975   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:43:30.280005   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:30.280164   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:43:30.280393   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:43:30.280590   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:43:30.280732   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:43:30.357227   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0531 18:43:30.362371   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0531 18:43:30.374798   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0531 18:43:30.380056   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0531 18:43:30.394771   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0531 18:43:30.399378   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0531 18:43:30.411373   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0531 18:43:30.417169   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0531 18:43:30.428731   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0531 18:43:30.433027   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0531 18:43:30.443960   33166 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0531 18:43:30.448284   33166 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0531 18:43:30.458960   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:43:30.485217   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:43:30.510052   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:43:30.536115   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:43:30.559594   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0531 18:43:30.586162   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:43:30.612565   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:43:30.638529   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:43:30.662797   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:43:30.686329   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:43:30.710793   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:43:30.734601   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0531 18:43:30.750657   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0531 18:43:30.767001   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0531 18:43:30.783327   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0531 18:43:30.800171   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0531 18:43:30.817581   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0531 18:43:30.834753   33166 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0531 18:43:30.852245   33166 ssh_runner.go:195] Run: openssl version
	I0531 18:43:30.858363   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:43:30.870534   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.876311   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.876368   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:43:30.883849   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:43:30.894355   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:43:30.904607   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.908893   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.908942   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:43:30.914948   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:43:30.925847   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:43:30.937046   33166 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.941694   33166 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.941756   33166 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:43:30.947363   33166 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
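The three `openssl x509 -hash` / `ln -fs` pairs above install each CA under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem) so the system trust lookup can find it. A minimal local sketch of that hash-and-symlink step, shelling out to `openssl` just as the log does; paths are placeholders and this is not minikube's own code, which runs the commands over SSH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate
// and symlinks it into certsDir as <hash>.0, mirroring the ln -fs step above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths, for illustration only.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```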
	I0531 18:43:30.957691   33166 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:43:30.961758   33166 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 18:43:30.961818   33166 kubeadm.go:928] updating node {m03 192.168.39.132 8443 v1.30.1 crio true true} ...
	I0531 18:43:30.961905   33166 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:43:30.961932   33166 kube-vip.go:115] generating kube-vip config ...
	I0531 18:43:30.961971   33166 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:43:30.978903   33166 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:43:30.978975   33166 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
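The config above is dropped as a static pod manifest (see the kube-vip.yaml transfer a few lines below), so the kubelet runs kube-vip on each control-plane node without going through the API server; `cp_enable`/`lb_port` turn on control-plane load balancing for the VIP 192.168.39.254:8443, and `vip_leasename: plndr-cp-lock` names the Lease used for leader election. A tiny sketch of the static-pod drop itself, i.e. writing manifest bytes into the kubelet's manifest directory (paths hypothetical; minikube actually copies the file over SSH):

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as shown above ...\n")
	dir := "/etc/kubernetes/manifests" // the kubelet's staticPodPath
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	// The kubelet watches this directory and starts/updates the pod on its own.
	if err := os.WriteFile(filepath.Join(dir, "kube-vip.yaml"), manifest, 0o644); err != nil {
		log.Fatal(err)
	}
}
```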
	I0531 18:43:30.979047   33166 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:43:30.988579   33166 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.1': No such file or directory
	
	Initiating transfer...
	I0531 18:43:30.988634   33166 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.1
	I0531 18:43:30.998029   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubelet.sha256
	I0531 18:43:30.998044   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubectl.sha256
	I0531 18:43:30.998040   33166 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm.sha256
	I0531 18:43:30.998065   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl -> /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:43:30.998068   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm -> /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:43:30.998077   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:43:30.998137   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl
	I0531 18:43:30.998161   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm
	I0531 18:43:31.014794   33166 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet -> /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:43:31.014830   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubeadm': No such file or directory
	I0531 18:43:31.014859   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubeadm --> /var/lib/minikube/binaries/v1.30.1/kubeadm (50249880 bytes)
	I0531 18:43:31.014869   33166 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet
	I0531 18:43:31.014900   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubectl': No such file or directory
	I0531 18:43:31.014925   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubectl --> /var/lib/minikube/binaries/v1.30.1/kubectl (51454104 bytes)
	I0531 18:43:31.041854   33166 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.1/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.1/kubelet': No such file or directory
	I0531 18:43:31.041899   33166 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.30.1/kubelet --> /var/lib/minikube/binaries/v1.30.1/kubelet (100100024 bytes)
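The `binary.go:76` lines above fetch kubelet/kubectl/kubeadm straight from dl.k8s.io and pin each download to the published `.sha256` file (the `?checksum=file:` suffix appears to be go-getter-style checksum pinning). A rough, self-contained sketch of that download-and-verify pattern, assuming the dl.k8s.io URLs shown in the log; this is not minikube's downloader, which also maintains a local cache:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory and fails on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.30.1/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	want := strings.Fields(string(sumFile))[0] // the file holds the hex digest
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubeadm verified,", len(bin), "bytes")
}
```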
	I0531 18:43:31.894463   33166 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0531 18:43:31.904252   33166 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0531 18:43:31.921223   33166 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:43:31.937881   33166 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:43:31.957239   33166 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:43:31.961508   33166 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 18:43:31.974977   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:32.114508   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:43:32.132751   33166 host.go:66] Checking if "ha-080809" exists ...
	I0531 18:43:32.133262   33166 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:43:32.133314   33166 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:43:32.150687   33166 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39319
	I0531 18:43:32.151082   33166 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:43:32.151663   33166 main.go:141] libmachine: Using API Version  1
	I0531 18:43:32.151692   33166 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:43:32.152035   33166 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:43:32.152241   33166 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:43:32.152381   33166 start.go:316] joinCluster: &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:43:32.152522   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0531 18:43:32.152541   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:43:32.155748   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:32.156115   33166 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:43:32.156143   33166 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:43:32.156274   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:43:32.156438   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:43:32.156598   33166 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:43:32.156735   33166 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:43:32.325915   33166 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:32.325974   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yfdj6g.iwwai9c61lgm6d3m --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m03 --control-plane --apiserver-advertise-address=192.168.39.132 --apiserver-bind-port=8443"
	I0531 18:43:54.924265   33166 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token yfdj6g.iwwai9c61lgm6d3m --discovery-token-ca-cert-hash sha256:44296b41f6182a65e03732e587981dcddce5469baca3888aac32359038af9890 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-080809-m03 --control-plane --apiserver-advertise-address=192.168.39.132 --apiserver-bind-port=8443": (22.598262737s)
	I0531 18:43:54.924303   33166 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0531 18:43:55.508360   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-080809-m03 minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404 minikube.k8s.io/name=ha-080809 minikube.k8s.io/primary=false
	I0531 18:43:55.639237   33166 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-080809-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0531 18:43:55.752944   33166 start.go:318] duration metric: took 23.600557093s to joinCluster
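The kubeadm join above pins the cluster CA with `--discovery-token-ca-cert-hash sha256:...`; that value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, which lets the joining node authenticate the control plane before trusting anything it serves. A small sketch that computes the same style of hash from a CA PEM file (path hypothetical; not minikube's or kubeadm's own code):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // any CA certificate in PEM form
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The discovery hash is sha256 over the DER-encoded SubjectPublicKeyInfo of the CA.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```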
	I0531 18:43:55.753037   33166 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 18:43:55.754695   33166 out.go:177] * Verifying Kubernetes components...
	I0531 18:43:55.753287   33166 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:43:55.756466   33166 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:43:55.984373   33166 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:43:56.006734   33166 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:43:56.007083   33166 kapi.go:59] client config for ha-080809: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.crt", KeyFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key", CAFile:"/home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cfa660), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0531 18:43:56.007199   33166 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.2:8443
	I0531 18:43:56.007484   33166 node_ready.go:35] waiting up to 6m0s for node "ha-080809-m03" to be "Ready" ...
	I0531 18:43:56.007582   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:56.007593   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:56.007604   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:56.007625   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:56.010582   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:43:56.508562   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:56.508582   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:56.508590   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:56.508594   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:56.522674   33166 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0531 18:43:57.008457   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:57.008482   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:57.008490   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:57.008495   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:57.012216   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:57.507763   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:57.507785   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:57.507793   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:57.507797   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:57.512220   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:58.007967   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:58.007994   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:58.008005   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:58.008011   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:58.011388   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:58.011954   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:43:58.508697   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:58.508716   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:58.508723   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:58.508728   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:58.512461   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:43:59.008338   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:59.008372   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:59.008384   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:59.008391   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:59.012804   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:43:59.508470   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:43:59.508492   33166 round_trippers.go:469] Request Headers:
	I0531 18:43:59.508504   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:43:59.508510   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:43:59.511548   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:00.008376   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:00.008399   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:00.008408   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:00.008414   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:00.012135   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:00.012892   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:44:00.507925   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:00.508010   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:00.508026   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:00.508033   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:00.512373   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:01.007831   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:01.007862   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:01.007871   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:01.007876   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:01.011604   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:01.507899   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:01.507922   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:01.507932   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:01.507937   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:01.511924   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:02.008201   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:02.008224   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:02.008232   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:02.008236   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:02.011827   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:02.508330   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:02.508355   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:02.508365   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:02.508370   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:02.512758   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:02.513438   33166 node_ready.go:53] node "ha-080809-m03" has status "Ready":"False"
	I0531 18:44:03.008715   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:03.008746   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:03.008757   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:03.008761   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:03.014361   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:44:03.507848   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:03.507872   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:03.507883   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:03.507891   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:03.510882   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.007877   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.007903   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.007914   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.007919   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.010907   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.011388   33166 node_ready.go:49] node "ha-080809-m03" has status "Ready":"True"
	I0531 18:44:04.011405   33166 node_ready.go:38] duration metric: took 8.003900807s for node "ha-080809-m03" to be "Ready" ...
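The GET loop above is the node-readiness wait: minikube polls `/api/v1/nodes/ha-080809-m03` roughly every 500ms until the node reports the Ready condition (note the earlier kubeadm.go:477 line, where the client deliberately talks to 192.168.39.2:8443 directly rather than the HA VIP). An equivalent sketch with client-go, assuming a kubeconfig path and the node name from the log; this is only an illustration of the polling pattern, not the minikube helper itself:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-080809-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for node to become Ready")
}
```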
	I0531 18:44:04.011413   33166 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:44:04.011466   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:04.011476   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.011483   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.011488   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.018104   33166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0531 18:44:04.024724   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.024807   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr2wg
	I0531 18:44:04.024818   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.024829   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.024836   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.027751   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.028558   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.028572   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.028579   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.028582   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.030777   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.031279   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.031295   33166 pod_ready.go:81] duration metric: took 6.552615ms for pod "coredns-7db6d8ff4d-cr2wg" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.031302   33166 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.031351   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-kbkpk
	I0531 18:44:04.031359   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.031366   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.031369   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.033333   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:44:04.033912   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.033926   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.033933   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.033936   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.035988   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.036468   33166 pod_ready.go:92] pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.036486   33166 pod_ready.go:81] duration metric: took 5.179103ms for pod "coredns-7db6d8ff4d-kbkpk" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.036492   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.036532   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809
	I0531 18:44:04.036540   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.036547   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.036551   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.038564   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.039041   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:04.039055   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.039062   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.039065   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.041060   33166 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0531 18:44:04.041626   33166 pod_ready.go:92] pod "etcd-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.041646   33166 pod_ready.go:81] duration metric: took 5.147706ms for pod "etcd-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.041657   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.041716   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m02
	I0531 18:44:04.041726   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.041736   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.041744   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.044464   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.044945   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:04.044981   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.044992   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.045000   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.047071   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.047512   33166 pod_ready.go:92] pod "etcd-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:04.047529   33166 pod_ready.go:81] duration metric: took 5.864641ms for pod "etcd-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.047536   33166 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:04.208888   33166 request.go:629] Waited for 161.284399ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.208974   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.208984   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.208997   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.209001   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.212729   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:04.408701   33166 request.go:629] Waited for 195.146959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.408790   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.408803   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.408817   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.408827   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.411794   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:04.608981   33166 request.go:629] Waited for 60.228904ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.609055   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:04.609068   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.609079   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.609084   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.612604   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:04.808795   33166 request.go:629] Waited for 195.4035ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.808862   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:04.808870   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:04.808878   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:04.808886   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:04.811742   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:05.047857   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:05.047878   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.047884   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.047889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.055030   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:05.208258   33166 request.go:629] Waited for 152.332527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.208321   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.208327   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.208334   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.208338   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.211967   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:05.547870   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:05.547892   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.547900   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.547904   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.551384   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:05.608434   33166 request.go:629] Waited for 56.233241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.608498   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:05.608506   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:05.608517   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:05.608547   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:05.612256   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.048352   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-080809-m03
	I0531 18:44:06.048376   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.048387   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.048395   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.051790   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.052643   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:06.052660   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.052670   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.052677   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.056064   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.056604   33166 pod_ready.go:92] pod "etcd-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.056623   33166 pod_ready.go:81] duration metric: took 2.009077542s for pod "etcd-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
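The recurring `request.go:629] Waited for ... due to client-side throttling, not priority and fairness` lines come from client-go's own rate limiter (the rest.Config above shows QPS:0, Burst:0, so the low defaults apply), not from server-side API Priority and Fairness; the waits are harmless here but do stretch the pod/node "Ready" polling. A small sketch of raising those limits on a rest.Config before building the clientset; the values are illustrative, not what minikube uses:

```go
package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	// client-go's defaults are QPS=5, Burst=10; raising them avoids the
	// client-side "Waited for ... due to client-side throttling" delays seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}
```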
	I0531 18:44:06.056646   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.207951   33166 request.go:629] Waited for 151.247089ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:44:06.208026   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809
	I0531 18:44:06.208034   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.208042   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.208049   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.211710   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.408063   33166 request.go:629] Waited for 195.289266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:06.408116   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:06.408122   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.408129   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.408133   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.411772   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.412556   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.412580   33166 pod_ready.go:81] duration metric: took 355.922336ms for pod "kube-apiserver-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.412593   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.608593   33166 request.go:629] Waited for 195.920387ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:44:06.608658   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m02
	I0531 18:44:06.608664   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.608671   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.608674   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.612075   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.808197   33166 request.go:629] Waited for 195.395118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:06.808281   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:06.808288   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:06.808301   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:06.808306   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:06.811647   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:06.812382   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:06.812398   33166 pod_ready.go:81] duration metric: took 399.797761ms for pod "kube-apiserver-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:06.812407   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:07.008917   33166 request.go:629] Waited for 196.45327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.009038   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.009049   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.009056   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.009061   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.012335   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.208347   33166 request.go:629] Waited for 195.262714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.208400   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.208407   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.208417   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.208424   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.211735   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.408789   33166 request.go:629] Waited for 96.249579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.408849   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.408856   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.408869   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.408876   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.413184   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:07.608021   33166 request.go:629] Waited for 193.205589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.608099   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:07.608110   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.608121   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.608125   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.611239   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:07.812841   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:07.812870   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:07.812881   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:07.812889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:07.816905   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.007966   33166 request.go:629] Waited for 190.306729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.008026   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.008033   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.008047   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.008072   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.011609   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.312868   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:08.312889   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.312896   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.312900   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.316364   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.408645   33166 request.go:629] Waited for 91.280051ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.408716   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.408722   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.408733   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.408737   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.412254   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.813057   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:08.813080   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.813086   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.813090   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.816997   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:08.817867   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:08.817880   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:08.817890   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:08.817895   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:08.820539   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:08.821206   33166 pod_ready.go:102] pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace has status "Ready":"False"
	I0531 18:44:09.313455   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:09.313477   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.313484   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.313489   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.318268   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:09.319391   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:09.319413   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.319424   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.319430   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.323063   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:09.813103   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:09.813132   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.813139   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.813144   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.817142   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:09.817858   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:09.817873   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:09.817882   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:09.817889   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:09.820621   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:10.313550   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:10.313575   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.313583   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.313585   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.317147   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:10.317961   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:10.317977   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.317984   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.317988   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.320632   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:10.812587   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:10.812611   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.812618   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.812623   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.816474   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:10.817285   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:10.817301   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:10.817308   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:10.817311   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:10.819792   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.312791   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809-m03
	I0531 18:44:11.312814   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.312821   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.312825   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.316277   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.316899   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:11.316917   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.316928   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.316934   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.319740   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.320286   33166 pod_ready.go:92] pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.320304   33166 pod_ready.go:81] duration metric: took 4.507890838s for pod "kube-apiserver-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.320315   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.320370   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809
	I0531 18:44:11.320381   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.320391   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.320402   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.323073   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.323796   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:11.323811   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.323821   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.323828   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.325967   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:11.326488   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.326502   33166 pod_ready.go:81] duration metric: took 6.180855ms for pod "kube-controller-manager-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.326510   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.408760   33166 request.go:629] Waited for 82.194514ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:44:11.408841   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m02
	I0531 18:44:11.408852   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.408863   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.408870   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.412339   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.608509   33166 request.go:629] Waited for 195.392871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:11.608578   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:11.608583   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.608590   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.608593   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.612042   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:11.612779   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:11.612803   33166 pod_ready.go:81] duration metric: took 286.284988ms for pod "kube-controller-manager-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.612816   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:11.808147   33166 request.go:629] Waited for 195.275334ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m03
	I0531 18:44:11.808219   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-080809-m03
	I0531 18:44:11.808243   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:11.808253   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:11.808262   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:11.811885   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.007959   33166 request.go:629] Waited for 195.359718ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.008057   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.008068   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.008079   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.008087   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.011646   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.012283   33166 pod_ready.go:92] pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.012301   33166 pod_ready.go:81] duration metric: took 399.477853ms for pod "kube-controller-manager-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.012311   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.208853   33166 request.go:629] Waited for 196.469258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:44:12.208976   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmg2w
	I0531 18:44:12.208985   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.208992   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.208998   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.211972   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:12.408385   33166 request.go:629] Waited for 195.420767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:12.408461   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:12.408469   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.408478   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.408484   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.415578   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:12.416365   33166 pod_ready.go:92] pod "kube-proxy-dmg2w" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.416391   33166 pod_ready.go:81] duration metric: took 404.072958ms for pod "kube-proxy-dmg2w" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.416405   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nzlz5" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.608390   33166 request.go:629] Waited for 191.914789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nzlz5
	I0531 18:44:12.608474   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nzlz5
	I0531 18:44:12.608482   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.608493   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.608502   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.612047   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.808208   33166 request.go:629] Waited for 195.34955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.808274   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:12.808279   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:12.808286   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:12.808290   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:12.811346   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:12.811776   33166 pod_ready.go:92] pod "kube-proxy-nzlz5" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:12.811796   33166 pod_ready.go:81] duration metric: took 395.38334ms for pod "kube-proxy-nzlz5" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:12.811807   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.008857   33166 request.go:629] Waited for 196.940044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:44:13.008924   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wxq4h
	I0531 18:44:13.008935   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.008946   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.008970   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.014184   33166 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0531 18:44:13.208051   33166 request.go:629] Waited for 193.284555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:13.208136   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:13.208147   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.208159   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.208168   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.212111   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:13.212873   33166 pod_ready.go:92] pod "kube-proxy-wxq4h" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:13.212893   33166 pod_ready.go:81] duration metric: took 401.079094ms for pod "kube-proxy-wxq4h" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.212901   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.408169   33166 request.go:629] Waited for 195.199419ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:44:13.408268   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809
	I0531 18:44:13.408283   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.408291   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.408300   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.412365   33166 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0531 18:44:13.608864   33166 request.go:629] Waited for 195.801565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:13.608934   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809
	I0531 18:44:13.608944   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.608967   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.608999   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.612469   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:13.613071   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:13.613093   33166 pod_ready.go:81] duration metric: took 400.185239ms for pod "kube-scheduler-ha-080809" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.613106   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:13.807931   33166 request.go:629] Waited for 194.751128ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:44:13.808013   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m02
	I0531 18:44:13.808021   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:13.808030   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:13.808038   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:13.811170   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.008050   33166 request.go:629] Waited for 196.214799ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:14.008128   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02
	I0531 18:44:14.008144   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.008154   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.008162   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.011698   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.012494   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:14.012515   33166 pod_ready.go:81] duration metric: took 399.402939ms for pod "kube-scheduler-ha-080809-m02" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.012525   33166 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.208543   33166 request.go:629] Waited for 195.940009ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m03
	I0531 18:44:14.208597   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-080809-m03
	I0531 18:44:14.208606   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.208614   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.208618   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.211615   33166 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0531 18:44:14.408286   33166 request.go:629] Waited for 195.991533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:14.408351   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes/ha-080809-m03
	I0531 18:44:14.408358   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.408368   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.408373   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.411642   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.412320   33166 pod_ready.go:92] pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace has status "Ready":"True"
	I0531 18:44:14.412341   33166 pod_ready.go:81] duration metric: took 399.80943ms for pod "kube-scheduler-ha-080809-m03" in "kube-system" namespace to be "Ready" ...
	I0531 18:44:14.412354   33166 pod_ready.go:38] duration metric: took 10.400932137s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0531 18:44:14.412372   33166 api_server.go:52] waiting for apiserver process to appear ...
	I0531 18:44:14.412430   33166 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 18:44:14.430933   33166 api_server.go:72] duration metric: took 18.677854715s to wait for apiserver process to appear ...
	I0531 18:44:14.430969   33166 api_server.go:88] waiting for apiserver healthz status ...
	I0531 18:44:14.431007   33166 api_server.go:253] Checking apiserver healthz at https://192.168.39.2:8443/healthz ...
	I0531 18:44:14.437802   33166 api_server.go:279] https://192.168.39.2:8443/healthz returned 200:
	ok
	I0531 18:44:14.437884   33166 round_trippers.go:463] GET https://192.168.39.2:8443/version
	I0531 18:44:14.437896   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.437908   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.437919   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.438850   33166 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0531 18:44:14.438917   33166 api_server.go:141] control plane version: v1.30.1
	I0531 18:44:14.438934   33166 api_server.go:131] duration metric: took 7.957476ms to wait for apiserver health ...
	I0531 18:44:14.438945   33166 system_pods.go:43] waiting for kube-system pods to appear ...
	I0531 18:44:14.608375   33166 request.go:629] Waited for 169.342969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:14.608444   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:14.608452   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.608465   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.608473   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.616151   33166 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0531 18:44:14.622539   33166 system_pods.go:59] 24 kube-system pods found
	I0531 18:44:14.622568   33166 system_pods.go:61] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:44:14.622575   33166 system_pods.go:61] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:44:14.622581   33166 system_pods.go:61] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:44:14.622586   33166 system_pods.go:61] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:44:14.622590   33166 system_pods.go:61] "etcd-ha-080809-m03" [f9812a68-fb91-42d9-ba6c-761c909c8b1d] Running
	I0531 18:44:14.622595   33166 system_pods.go:61] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:44:14.622599   33166 system_pods.go:61] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:44:14.622608   33166 system_pods.go:61] "kindnet-qzfbh" [30e56f89-1f34-4bc0-9728-29f158a091b4] Running
	I0531 18:44:14.622612   33166 system_pods.go:61] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:44:14.622616   33166 system_pods.go:61] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:44:14.622622   33166 system_pods.go:61] "kube-apiserver-ha-080809-m03" [a8246e62-434e-4572-8943-8ab28a126794] Running
	I0531 18:44:14.622627   33166 system_pods.go:61] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:44:14.622634   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:44:14.622641   33166 system_pods.go:61] "kube-controller-manager-ha-080809-m03" [74632272-0a23-4df1-aaae-675ab9550e19] Running
	I0531 18:44:14.622649   33166 system_pods.go:61] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:44:14.622653   33166 system_pods.go:61] "kube-proxy-nzlz5" [d59a1a71-3dc2-4cf6-9e01-b61fa8f26053] Running
	I0531 18:44:14.622656   33166 system_pods.go:61] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:44:14.622659   33166 system_pods.go:61] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:44:14.622662   33166 system_pods.go:61] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:44:14.622665   33166 system_pods.go:61] "kube-scheduler-ha-080809-m03" [e9840dfe-2860-4c23-af6a-c45ebd36869e] Running
	I0531 18:44:14.622669   33166 system_pods.go:61] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:44:14.622672   33166 system_pods.go:61] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:44:14.622674   33166 system_pods.go:61] "kube-vip-ha-080809-m03" [b152f6a1-3708-4f12-82f5-385eb59b3572] Running
	I0531 18:44:14.622677   33166 system_pods.go:61] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:44:14.622683   33166 system_pods.go:74] duration metric: took 183.729019ms to wait for pod list to return data ...
	I0531 18:44:14.622692   33166 default_sa.go:34] waiting for default service account to be created ...
	I0531 18:44:14.807953   33166 request.go:629] Waited for 185.190851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:44:14.808014   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/default/serviceaccounts
	I0531 18:44:14.808019   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:14.808027   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:14.808035   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:14.811075   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:14.811211   33166 default_sa.go:45] found service account: "default"
	I0531 18:44:14.811232   33166 default_sa.go:55] duration metric: took 188.533375ms for default service account to be created ...
	I0531 18:44:14.811243   33166 system_pods.go:116] waiting for k8s-apps to be running ...
	I0531 18:44:15.008550   33166 request.go:629] Waited for 197.237981ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:15.008669   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/namespaces/kube-system/pods
	I0531 18:44:15.008681   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:15.008691   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:15.008702   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:15.015400   33166 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0531 18:44:15.022687   33166 system_pods.go:86] 24 kube-system pods found
	I0531 18:44:15.022713   33166 system_pods.go:89] "coredns-7db6d8ff4d-cr2wg" [9f90f810-88d7-443c-8537-1eb0c65417be] Running
	I0531 18:44:15.022719   33166 system_pods.go:89] "coredns-7db6d8ff4d-kbkpk" [1de171b4-ac9e-42d1-be02-d585d0d01587] Running
	I0531 18:44:15.022723   33166 system_pods.go:89] "etcd-ha-080809" [56a31782-586f-4031-b3af-14b3079465c9] Running
	I0531 18:44:15.022728   33166 system_pods.go:89] "etcd-ha-080809-m02" [6b4fa1c2-be34-4fa6-b600-f5214c23b9de] Running
	I0531 18:44:15.022732   33166 system_pods.go:89] "etcd-ha-080809-m03" [f9812a68-fb91-42d9-ba6c-761c909c8b1d] Running
	I0531 18:44:15.022736   33166 system_pods.go:89] "kindnet-6jqp4" [97ee357b-3e8c-411b-bc52-f6efdce9b8ed] Running
	I0531 18:44:15.022739   33166 system_pods.go:89] "kindnet-fzzfx" [1f8aa910-a7fc-44cf-88ad-dbf14049ac1d] Running
	I0531 18:44:15.022744   33166 system_pods.go:89] "kindnet-qzfbh" [30e56f89-1f34-4bc0-9728-29f158a091b4] Running
	I0531 18:44:15.022751   33166 system_pods.go:89] "kube-apiserver-ha-080809" [d927093c-9ff3-4405-989e-32aaead2e89a] Running
	I0531 18:44:15.022757   33166 system_pods.go:89] "kube-apiserver-ha-080809-m02" [0f1c03da-2e74-469e-8e35-4bb51cdb69e2] Running
	I0531 18:44:15.022763   33166 system_pods.go:89] "kube-apiserver-ha-080809-m03" [a8246e62-434e-4572-8943-8ab28a126794] Running
	I0531 18:44:15.022770   33166 system_pods.go:89] "kube-controller-manager-ha-080809" [f267f923-1e85-413b-be64-a45fe7a71acf] Running
	I0531 18:44:15.022780   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m02" [10424961-670c-4578-8be1-48480d19c17b] Running
	I0531 18:44:15.022790   33166 system_pods.go:89] "kube-controller-manager-ha-080809-m03" [74632272-0a23-4df1-aaae-675ab9550e19] Running
	I0531 18:44:15.022797   33166 system_pods.go:89] "kube-proxy-dmg2w" [50600abe-9fae-41f8-8ff2-05c8407163c0] Running
	I0531 18:44:15.022801   33166 system_pods.go:89] "kube-proxy-nzlz5" [d59a1a71-3dc2-4cf6-9e01-b61fa8f26053] Running
	I0531 18:44:15.022807   33166 system_pods.go:89] "kube-proxy-wxq4h" [f8c4e91b-a913-4baf-ad50-44d14b9d6792] Running
	I0531 18:44:15.022810   33166 system_pods.go:89] "kube-scheduler-ha-080809" [e739cfb0-a779-4262-8617-8f66e77c9b14] Running
	I0531 18:44:15.022816   33166 system_pods.go:89] "kube-scheduler-ha-080809-m02" [f7dcba0b-7dbd-44df-8344-a5c36a3437b6] Running
	I0531 18:44:15.022821   33166 system_pods.go:89] "kube-scheduler-ha-080809-m03" [e9840dfe-2860-4c23-af6a-c45ebd36869e] Running
	I0531 18:44:15.022827   33166 system_pods.go:89] "kube-vip-ha-080809" [12f7c64f-8e4d-4c2d-ae6f-13fc270d732b] Running
	I0531 18:44:15.022830   33166 system_pods.go:89] "kube-vip-ha-080809-m02" [7370bb36-f4fc-4885-a7f1-c7db2f6ae58d] Running
	I0531 18:44:15.022836   33166 system_pods.go:89] "kube-vip-ha-080809-m03" [b152f6a1-3708-4f12-82f5-385eb59b3572] Running
	I0531 18:44:15.022839   33166 system_pods.go:89] "storage-provisioner" [11521630-b23f-4574-9eda-8d0e5d0b7e10] Running
	I0531 18:44:15.022851   33166 system_pods.go:126] duration metric: took 211.598428ms to wait for k8s-apps to be running ...
	I0531 18:44:15.022864   33166 system_svc.go:44] waiting for kubelet service to be running ....
	I0531 18:44:15.022915   33166 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 18:44:15.038466   33166 system_svc.go:56] duration metric: took 15.595196ms WaitForService to wait for kubelet
	I0531 18:44:15.038491   33166 kubeadm.go:576] duration metric: took 19.285419331s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:44:15.038508   33166 node_conditions.go:102] verifying NodePressure condition ...
	I0531 18:44:15.208050   33166 request.go:629] Waited for 169.475176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.2:8443/api/v1/nodes
	I0531 18:44:15.208101   33166 round_trippers.go:463] GET https://192.168.39.2:8443/api/v1/nodes
	I0531 18:44:15.208113   33166 round_trippers.go:469] Request Headers:
	I0531 18:44:15.208124   33166 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0531 18:44:15.208132   33166 round_trippers.go:473]     Accept: application/json, */*
	I0531 18:44:15.211884   33166 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0531 18:44:15.212898   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212921   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212932   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212937   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212941   33166 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0531 18:44:15.212944   33166 node_conditions.go:123] node cpu capacity is 2
	I0531 18:44:15.212948   33166 node_conditions.go:105] duration metric: took 174.435487ms to run NodePressure ...
	I0531 18:44:15.212972   33166 start.go:240] waiting for startup goroutines ...
	I0531 18:44:15.213001   33166 start.go:254] writing updated cluster config ...
	I0531 18:44:15.213301   33166 ssh_runner.go:195] Run: rm -f paused
	I0531 18:44:15.263986   33166 start.go:600] kubectl: 1.30.1, cluster: 1.30.1 (minor skew: 0)
	I0531 18:44:15.265792   33166 out.go:177] * Done! kubectl is now configured to use "ha-080809" cluster and "default" namespace by default
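[Editor's note] The pod_ready entries above show the pattern minikube follows for each control-plane pod: GET the pod, check whether its Ready condition is True, GET its node, and repeat on a short interval until ready or a 6m timeout. The following is a minimal illustrative sketch of that polling pattern using client-go — it is not minikube's actual implementation, and the helper name waitPodReady is hypothetical.

// Sketch only: poll a pod until its Ready condition is True, as the log above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady is a hypothetical helper; minikube's own logic lives in pod_ready.go.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-080809-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}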
	
	
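[Editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS=5, Burst=10), not from server-side API Priority and Fairness. Below is a minimal sketch, not minikube code, of how a client-go consumer can raise those limits so bursts of GETs like the readiness polling above are not delayed; the helper name newFastClient is hypothetical.

// Sketch only: raise client-go's client-side rate limits on rest.Config.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Defaults are QPS=5, Burst=10; raising them avoids the client-side waits
	// logged above ("Waited for ... due to client-side throttling").
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}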
	==> CRI-O <==
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.790712867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181316790686077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ea2a467-f6ad-4758-bd1e-63d4567e76ad name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.792108758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c8d09a7f-4d26-40e8-953c-81a8c7b45155 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.792184563Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c8d09a7f-4d26-40e8-953c-81a8c7b45155 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.792684909Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c8d09a7f-4d26-40e8-953c-81a8c7b45155 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.839513883Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f88685a2-a37c-442d-8aa7-b8af4bcd2c49 name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.839592223Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f88685a2-a37c-442d-8aa7-b8af4bcd2c49 name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.841232123Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=780a9c76-8cf9-4fa4-bb30-63cffa194200 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.844289526Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181316844262414,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=780a9c76-8cf9-4fa4-bb30-63cffa194200 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.844741250Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eacd9612-76b8-42cb-8b47-102bb0ef9c16 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.844790773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eacd9612-76b8-42cb-8b47-102bb0ef9c16 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.845090095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eacd9612-76b8-42cb-8b47-102bb0ef9c16 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.887704068Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=39cb7b5c-bc5a-45a8-8f98-3955fdc00179 name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.887770709Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=39cb7b5c-bc5a-45a8-8f98-3955fdc00179 name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.889135094Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d8c700a1-57ed-449a-9885-b3c92e95508f name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.889550553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181316889530273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d8c700a1-57ed-449a-9885-b3c92e95508f name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.890056360Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c9c86d7-98a4-46a6-833b-d5e620567aa7 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.890110951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c9c86d7-98a4-46a6-833b-d5e620567aa7 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.890525164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c9c86d7-98a4-46a6-833b-d5e620567aa7 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.927721021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a3671f7-2eb8-4407-a2e0-0a2f80f555bb name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.927793599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a3671f7-2eb8-4407-a2e0-0a2f80f555bb name=/runtime.v1.RuntimeService/Version
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.929069142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a0a353ca-b5d0-4a6d-a917-62206b70bedd name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.930427360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717181316930398195,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:145840,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0a353ca-b5d0-4a6d-a917-62206b70bedd name=/runtime.v1.ImageService/ImageFsInfo
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.932255746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef0840af-57c3-42a8-a3c8-0fcd325a94f9 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.932322642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef0840af-57c3-42a8-a3c8-0fcd325a94f9 name=/runtime.v1.RuntimeService/ListContainers
	May 31 18:48:36 ha-080809 crio[683]: time="2024-05-31 18:48:36.932805856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492,PodSandboxId:ddcee0fba28183cc0292a1265bba537d1781896b67f0fca5dedbf6abc94eef26,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913363441596,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\
"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53f4842200c266d40d09c15c6a7a4f03e9f89d6e5275c2a127f5f10ea5de2066,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717180913290411484,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679,PodSandboxId:cc0c09dd14d70cb4b5097468dcdc6c222a3d5c3c19edf24d4375b915dc7f63df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717180913311381965,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-8
8d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:C
ONTAINER_RUNNING,CreatedAt:1717180911840390648,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:17171809
08041443440,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:866bb4e7c05a49b19e6b793aae277fe300ac3aca7345ea02a4210e729a1572f9,PodSandboxId:74e5f430b53600e253bc5f09df245672c907d7addc770aee32876f7b023777c9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717180890
170934119,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 23456d557e5f95caef8eb617de3db695,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e,PodSandboxId:4b398e7c07ccc08e352ea0fefe48eabd28b3468925afef6ee1e7d56d4910600b,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717180887203523479,Labels:map[string]string{io.kuber
netes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801,PodSandboxId:a91f5f7e726356876499c8c2d9ffac1b260c7956211cb5ca371e9e7ba75fa308,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717180887133191821,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717180887210303049,Labels:map[string]string{io.kubernetes.container.name: kube-schedule
r,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a,PodSandboxId:0f4f1de81138d5effe6b3d5dd6bfbdfd2fb3b1a8587606a8431c04bf53f8b8f8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717180887090232010,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.n
ame: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef0840af-57c3-42a8-a3c8-0fcd325a94f9 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2c9526bf60d95       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     6 minutes ago       Running             coredns                   0                   ddcee0fba2818       coredns-7db6d8ff4d-kbkpk
	54e4715f49646       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                     6 minutes ago       Running             coredns                   0                   cc0c09dd14d70       coredns-7db6d8ff4d-cr2wg
	53f4842200c26       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     6 minutes ago       Running             storage-provisioner       0                   8bd72a9b0e002       storage-provisioner
	eda951a1aef5d       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266   6 minutes ago       Running             kindnet-cni               0                   2bdbf3a323266       kindnet-fzzfx
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                     6 minutes ago       Running             kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	866bb4e7c05a4       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f    7 minutes ago       Running             kube-vip                  0                   74e5f430b5360       kube-vip-ha-080809
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                     7 minutes ago       Running             kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	588c0be817c52       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                     7 minutes ago       Running             etcd                      0                   4b398e7c07ccc       etcd-ha-080809
	0d31a9e756e1e       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                     7 minutes ago       Running             kube-controller-manager   0                   a91f5f7e72635       kube-controller-manager-ha-080809
	0116f67d37a6a       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                     7 minutes ago       Running             kube-apiserver            0                   0f4f1de81138d       kube-apiserver-ha-080809
	
	
	==> coredns [2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492] <==
	[INFO] 10.244.2.2:51434 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168245s
	[INFO] 10.244.2.2:51800 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001241702s
	[INFO] 10.244.2.2:41221 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000093978s
	[INFO] 10.244.2.2:54690 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080529s
	[INFO] 10.244.1.2:35391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115585s
	[INFO] 10.244.1.2:46555 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001915248s
	[INFO] 10.244.1.2:57394 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151681s
	[INFO] 10.244.1.2:36090 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000071582s
	[INFO] 10.244.1.3:39302 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112531s
	[INFO] 10.244.1.3:59251 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146082s
	[INFO] 10.244.1.3:44994 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186325s
	[INFO] 10.244.2.2:59669 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101916s
	[INFO] 10.244.2.2:48334 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107498s
	[INFO] 10.244.1.2:38765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185939s
	[INFO] 10.244.1.2:60956 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000128962s
	[INFO] 10.244.1.2:52932 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087027s
	[INFO] 10.244.1.2:39378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092276s
	[INFO] 10.244.1.3:44446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000339346s
	[INFO] 10.244.2.2:46765 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109543s
	[INFO] 10.244.2.2:41182 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131261s
	[INFO] 10.244.2.2:33464 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152439s
	[INFO] 10.244.1.2:42271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104723s
	[INFO] 10.244.1.2:46690 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000066235s
	[INFO] 10.244.1.2:42389 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013689s
	[INFO] 10.244.1.2:38173 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00011827s
	
	
	==> coredns [54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679] <==
	[INFO] 10.244.1.3:33000 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001660325s
	[INFO] 10.244.1.3:47322 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.005658116s
	[INFO] 10.244.1.3:51140 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.014792944s
	[INFO] 10.244.2.2:55470 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001442856s
	[INFO] 10.244.2.2:34606 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000083801s
	[INFO] 10.244.1.2:44192 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000087527s
	[INFO] 10.244.1.2:58109 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000112644s
	[INFO] 10.244.1.3:48252 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001588s
	[INFO] 10.244.1.3:54056 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269568s
	[INFO] 10.244.1.3:46632 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000285388s
	[INFO] 10.244.1.3:53120 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.013333201s
	[INFO] 10.244.1.3:41820 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000181168s
	[INFO] 10.244.2.2:45232 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101776s
	[INFO] 10.244.2.2:33325 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079862s
	[INFO] 10.244.1.2:50899 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130749s
	[INFO] 10.244.1.2:41406 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001416454s
	[INFO] 10.244.1.2:60468 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129757s
	[INFO] 10.244.1.2:54879 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126191s
	[INFO] 10.244.1.3:44719 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113779s
	[INFO] 10.244.2.2:45824 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000196144s
	[INFO] 10.244.2.2:49963 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000095304s
	[INFO] 10.244.1.3:40266 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00018931s
	[INFO] 10.244.1.3:43033 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000251756s
	[INFO] 10.244.1.3:51239 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124133s
	[INFO] 10.244.2.2:45507 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.001299123s
	
	
	==> describe nodes <==
	Name:               ha-080809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:41:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:47:10 +0000   Fri, 31 May 2024 18:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-080809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1bd8d3c42441893aa8670274ffb1e
	  System UUID:                0eb1bd8d-3c42-4418-93aa-8670274ffb1e
	  Boot ID:                    a71cdc12-b2dc-410e-bd89-30705a38ac97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-cr2wg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m50s
	  kube-system                 coredns-7db6d8ff4d-kbkpk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m50s
	  kube-system                 etcd-ha-080809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m5s
	  kube-system                 kindnet-fzzfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m50s
	  kube-system                 kube-apiserver-ha-080809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-controller-manager-ha-080809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m5s
	  kube-system                 kube-proxy-dmg2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m50s
	  kube-system                 kube-scheduler-ha-080809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m4s
	  kube-system                 kube-vip-ha-080809                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m48s  kube-proxy       
	  Normal  Starting                 7m4s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m4s   kubelet          Node ha-080809 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m4s   kubelet          Node ha-080809 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m4s   kubelet          Node ha-080809 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m51s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal  NodeReady                6m45s  kubelet          Node ha-080809 status is now: NodeReady
	  Normal  RegisteredNode           5m38s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal  RegisteredNode           4m28s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	
	
	Name:               ha-080809-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:42:40 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:45:14 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:44:43 +0000   Fri, 31 May 2024 18:45:54 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-080809-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26fe5dde81c649a890a62a46837233ef
	  System UUID:                26fe5dde-81c6-49a8-90a6-2a46837233ef
	  Boot ID:                    ec159d72-949a-4946-858c-4fe0183a91d2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dwnsz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  default                     busybox-fc5497c4f-wqwrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-080809-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m55s
	  kube-system                 kindnet-6jqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m57s
	  kube-system                 kube-apiserver-ha-080809-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-controller-manager-ha-080809-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-proxy-wxq4h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-scheduler-ha-080809-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 kube-vip-ha-080809-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m56s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           5m38s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  NodeNotReady             2m43s                  node-controller  Node ha-080809-m02 status is now: NodeNotReady
	
	
	Name:               ha-080809-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:43:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:43:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:44:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-080809-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 547b87f87b134b9798b7d615ab502227
	  System UUID:                547b87f8-7b13-4b97-98b7-d615ab502227
	  Boot ID:                    8e35dd93-e6c7-4f99-9f16-d89c3fb275fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qtbxj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-ha-080809-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m43s
	  kube-system                 kindnet-qzfbh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m45s
	  kube-system                 kube-apiserver-ha-080809-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-controller-manager-ha-080809-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-proxy-nzlz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-scheduler-ha-080809-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-vip-ha-080809-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x9 over 4m45s)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x7 over 4m45s)  kubelet          Node ha-080809-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           4m41s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           4m28s                  node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	
	
	Name:               ha-080809-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:44:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:44:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:45:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-080809-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4babfa9b2c834bb583245811edaf0c5d
	  System UUID:                4babfa9b-2c83-4bb5-8324-5811edaf0c5d
	  Boot ID:                    b6f38232-474d-40b0-b4c0-a2b12b8f03ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gtssc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m44s
	  kube-system                 kube-proxy-mzhqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m37s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m44s (x2 over 3m44s)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x2 over 3m44s)  kubelet          Node ha-080809-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x2 over 3m44s)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeReady                3m33s                  kubelet          Node ha-080809-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[May31 18:40] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051744] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041405] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.498015] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.407254] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[May31 18:41] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.297552] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.055284] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057048] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.190072] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.115739] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.263739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.190448] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.324345] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e] <==
	{"level":"warn","ts":"2024-05-31T18:48:37.06117Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.16064Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.193019Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.199318Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.203366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.219349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.225932Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.232129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.236747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.240487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.252469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.261001Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.261143Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.267164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.27137Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.274184Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.281789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.308175Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.322129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.331269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.339072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.355454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.361447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.365123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T18:48:37.371599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:48:37 up 7 min,  0 users,  load average: 0.38, 0.31, 0.17
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [eda951a1aef5d2f0f3c373ab2a6a7413edd7dd9225ab363a7630f566bed4f022] <==
	I0531 18:48:03.091031       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:48:13.105472       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:48:13.105567       1 main.go:227] handling current node
	I0531 18:48:13.105597       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:48:13.105617       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:48:13.105754       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:48:13.105775       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:48:13.105831       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:48:13.105916       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:48:23.119663       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:48:23.120094       1 main.go:227] handling current node
	I0531 18:48:23.120131       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:48:23.120193       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:48:23.120321       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:48:23.120346       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:48:23.120400       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:48:23.120422       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	I0531 18:48:33.135972       1 main.go:223] Handling node with IPs: map[192.168.39.2:{}]
	I0531 18:48:33.136078       1 main.go:227] handling current node
	I0531 18:48:33.136113       1 main.go:223] Handling node with IPs: map[192.168.39.51:{}]
	I0531 18:48:33.136137       1 main.go:250] Node ha-080809-m02 has CIDR [10.244.1.0/24] 
	I0531 18:48:33.136337       1 main.go:223] Handling node with IPs: map[192.168.39.132:{}]
	I0531 18:48:33.136397       1 main.go:250] Node ha-080809-m03 has CIDR [10.244.2.0/24] 
	I0531 18:48:33.136495       1 main.go:223] Handling node with IPs: map[192.168.39.40:{}]
	I0531 18:48:33.136522       1 main.go:250] Node ha-080809-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0116f67d37a6aa0019a8c39e4135364e508f877bc985bf5111a2029c1d14e42a] <==
	I0531 18:41:32.167967       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0531 18:41:32.177477       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2]
	I0531 18:41:32.178289       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 18:41:32.182627       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 18:41:32.544463       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 18:41:33.450206       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 18:41:33.468495       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0531 18:41:33.482739       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 18:41:47.322720       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0531 18:41:47.475006       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0531 18:44:20.941354       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41450: use of closed network connection
	E0531 18:44:21.128187       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41470: use of closed network connection
	E0531 18:44:21.298026       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41490: use of closed network connection
	E0531 18:44:21.500538       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41508: use of closed network connection
	E0531 18:44:21.680064       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41538: use of closed network connection
	E0531 18:44:21.891464       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41560: use of closed network connection
	E0531 18:44:22.066902       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41578: use of closed network connection
	E0531 18:44:22.251410       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41590: use of closed network connection
	E0531 18:44:22.430307       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41600: use of closed network connection
	E0531 18:44:22.717157       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41622: use of closed network connection
	E0531 18:44:22.909319       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41640: use of closed network connection
	E0531 18:44:23.098133       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41658: use of closed network connection
	E0531 18:44:23.270228       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41676: use of closed network connection
	E0531 18:44:23.455097       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41700: use of closed network connection
	E0531 18:44:23.635398       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41710: use of closed network connection
	
	
	==> kube-controller-manager [0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801] <==
	I0531 18:44:16.544059       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="612.311µs"
	I0531 18:44:16.624131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.854735ms"
	I0531 18:44:16.624305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.636µs"
	I0531 18:44:16.697741       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.223009ms"
	I0531 18:44:16.697967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="131.446µs"
	I0531 18:44:17.103792       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="66.512µs"
	I0531 18:44:18.115478       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.653µs"
	I0531 18:44:18.124779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.455µs"
	I0531 18:44:18.129445       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.236µs"
	I0531 18:44:20.388943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.45821ms"
	I0531 18:44:20.390209       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="216.792µs"
	I0531 18:44:20.468536       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.398456ms"
	I0531 18:44:20.469533       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="266.293µs"
	E0531 18:44:53.633299       1 certificate_controller.go:146] Sync csr-jvd5s failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-jvd5s": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:44:53.931545       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-080809-m04\" does not exist"
	I0531 18:44:53.946369       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-080809-m04" podCIDRs=["10.244.3.0/24"]
	I0531 18:44:56.716578       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809-m04"
	I0531 18:45:04.488190       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-080809-m04"
	I0531 18:45:54.912166       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-080809-m04"
	I0531 18:45:54.968755       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.679704ms"
	I0531 18:45:54.969057       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.292µs"
	I0531 18:45:55.000471       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.782809ms"
	I0531 18:45:55.000587       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.046µs"
	I0531 18:45:55.043756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.163559ms"
	I0531 18:45:55.043989       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.535µs"
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	I0531 18:41:48.266762       1 server_linux.go:69] "Using iptables proxy"
	I0531 18:41:48.288049       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.2"]
	I0531 18:41:48.334580       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 18:41:48.334639       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 18:41:48.334670       1 server_linux.go:165] "Using iptables Proxier"
	I0531 18:41:48.338365       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 18:41:48.338613       1 server.go:872] "Version info" version="v1.30.1"
	I0531 18:41:48.338654       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:41:48.340466       1 config.go:192] "Starting service config controller"
	I0531 18:41:48.340510       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 18:41:48.340554       1 config.go:101] "Starting endpoint slice config controller"
	I0531 18:41:48.340581       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 18:41:48.341961       1 config.go:319] "Starting node config controller"
	I0531 18:41:48.341997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 18:41:48.441094       1 shared_informer.go:320] Caches are synced for service config
	I0531 18:41:48.441214       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 18:41:48.442485       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	E0531 18:41:31.700889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:41:31.714101       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 18:41:31.714223       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 18:41:31.733132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:41:31.733160       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:41:31.737459       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:41:31.738059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:41:31.800135       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:41:31.800182       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:41:31.814109       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:41:31.814154       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:41:31.815296       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 18:41:31.815429       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0531 18:41:34.722391       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 18:44:16.176693       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wqwrw\": pod busybox-fc5497c4f-wqwrw is already assigned to node \"ha-080809-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-wqwrw" node="ha-080809-m02"
	E0531 18:44:16.176962       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod e728434a-8e6b-4486-920a-cf6998e12761(default/busybox-fc5497c4f-wqwrw) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-wqwrw"
	E0531 18:44:16.176998       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-wqwrw\": pod busybox-fc5497c4f-wqwrw is already assigned to node \"ha-080809-m02\"" pod="default/busybox-fc5497c4f-wqwrw"
	I0531 18:44:16.177057       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-wqwrw" node="ha-080809-m02"
	I0531 18:44:16.192752       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="43e80d7c-f669-4515-b508-8a3080ac7715" pod="default/busybox-fc5497c4f-dwnsz" assumedNode="ha-080809-m02" currentNode="ha-080809"
	E0531 18:44:16.199425       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dwnsz\": pod busybox-fc5497c4f-dwnsz is already assigned to node \"ha-080809-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-dwnsz" node="ha-080809"
	E0531 18:44:16.199503       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 43e80d7c-f669-4515-b508-8a3080ac7715(default/busybox-fc5497c4f-dwnsz) was assumed on ha-080809 but assigned to ha-080809-m02" pod="default/busybox-fc5497c4f-dwnsz"
	E0531 18:44:16.199528       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-dwnsz\": pod busybox-fc5497c4f-dwnsz is already assigned to node \"ha-080809-m02\"" pod="default/busybox-fc5497c4f-dwnsz"
	I0531 18:44:16.199583       1 schedule_one.go:1064] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-dwnsz" node="ha-080809-m02"
	E0531 18:44:53.997727       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wrk52\": pod kindnet-wrk52 is already assigned to node \"ha-080809-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wrk52" node="ha-080809-m04"
	E0531 18:44:53.997917       1 schedule_one.go:1051] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wrk52\": pod kindnet-wrk52 is already assigned to node \"ha-080809-m04\"" pod="kube-system/kindnet-wrk52"
	
	
	==> kubelet <==
	May 31 18:44:33 ha-080809 kubelet[1363]: E0531 18:44:33.424151    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:44:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:44:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:44:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:44:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:45:33 ha-080809 kubelet[1363]: E0531 18:45:33.423735    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:45:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:45:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:45:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:45:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:46:33 ha-080809 kubelet[1363]: E0531 18:46:33.425397    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:46:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:46:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:46:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:46:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:47:33 ha-080809 kubelet[1363]: E0531 18:47:33.422224    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:47:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:47:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:47:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:47:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 18:48:33 ha-080809 kubelet[1363]: E0531 18:48:33.430586    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 18:48:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 18:48:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 18:48:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 18:48:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:261: (dbg) Run:  kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (819.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-080809 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-080809 -v=7 --alsologtostderr
E0531 18:49:58.917050   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:50:26.601220   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-080809 -v=7 --alsologtostderr: exit status 82 (2m1.902775295s)

                                                
                                                
-- stdout --
	* Stopping node "ha-080809-m04"  ...
	* Stopping node "ha-080809-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:48:38.823235   38882 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:48:38.823502   38882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:38.823512   38882 out.go:304] Setting ErrFile to fd 2...
	I0531 18:48:38.823515   38882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:48:38.823754   38882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:48:38.824017   38882 out.go:298] Setting JSON to false
	I0531 18:48:38.824111   38882 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:38.824506   38882 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:38.824600   38882 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:48:38.824778   38882 mustload.go:65] Loading cluster: ha-080809
	I0531 18:48:38.824942   38882 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:48:38.825002   38882 stop.go:39] StopHost: ha-080809-m04
	I0531 18:48:38.825407   38882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:38.825455   38882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:38.840154   38882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37561
	I0531 18:48:38.840572   38882 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:38.841190   38882 main.go:141] libmachine: Using API Version  1
	I0531 18:48:38.841204   38882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:38.841637   38882 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:38.844423   38882 out.go:177] * Stopping node "ha-080809-m04"  ...
	I0531 18:48:38.845583   38882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 18:48:38.845618   38882 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 18:48:38.845837   38882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 18:48:38.845870   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 18:48:38.848295   38882 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:38.848651   38882 main.go:141] libmachine: (ha-080809-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e6:c6:0c", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:44:38 +0000 UTC Type:0 Mac:52:54:00:e6:c6:0c Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:ha-080809-m04 Clientid:01:52:54:00:e6:c6:0c}
	I0531 18:48:38.848684   38882 main.go:141] libmachine: (ha-080809-m04) DBG | domain ha-080809-m04 has defined IP address 192.168.39.40 and MAC address 52:54:00:e6:c6:0c in network mk-ha-080809
	I0531 18:48:38.848834   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHPort
	I0531 18:48:38.849008   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHKeyPath
	I0531 18:48:38.849167   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHUsername
	I0531 18:48:38.849324   38882 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m04/id_rsa Username:docker}
	I0531 18:48:38.933350   38882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0531 18:48:38.987863   38882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0531 18:48:39.043032   38882 main.go:141] libmachine: Stopping "ha-080809-m04"...
	I0531 18:48:39.043083   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:39.044392   38882 main.go:141] libmachine: (ha-080809-m04) Calling .Stop
	I0531 18:48:39.047827   38882 main.go:141] libmachine: (ha-080809-m04) Waiting for machine to stop 0/120
	I0531 18:48:40.280718   38882 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 18:48:40.282032   38882 main.go:141] libmachine: Machine "ha-080809-m04" was stopped.
	I0531 18:48:40.282048   38882 stop.go:75] duration metric: took 1.436467572s to stop
	I0531 18:48:40.282087   38882 stop.go:39] StopHost: ha-080809-m03
	I0531 18:48:40.282375   38882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:48:40.282410   38882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:48:40.296475   38882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39065
	I0531 18:48:40.296904   38882 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:48:40.297405   38882 main.go:141] libmachine: Using API Version  1
	I0531 18:48:40.297425   38882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:48:40.297785   38882 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:48:40.299613   38882 out.go:177] * Stopping node "ha-080809-m03"  ...
	I0531 18:48:40.300920   38882 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 18:48:40.300952   38882 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 18:48:40.301194   38882 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 18:48:40.301211   38882 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 18:48:40.303759   38882 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:40.304221   38882 main.go:141] libmachine: (ha-080809-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9d:3b:ba", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:43:16 +0000 UTC Type:0 Mac:52:54:00:9d:3b:ba Iaid: IPaddr:192.168.39.132 Prefix:24 Hostname:ha-080809-m03 Clientid:01:52:54:00:9d:3b:ba}
	I0531 18:48:40.304259   38882 main.go:141] libmachine: (ha-080809-m03) DBG | domain ha-080809-m03 has defined IP address 192.168.39.132 and MAC address 52:54:00:9d:3b:ba in network mk-ha-080809
	I0531 18:48:40.304427   38882 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHPort
	I0531 18:48:40.304584   38882 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHKeyPath
	I0531 18:48:40.304756   38882 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHUsername
	I0531 18:48:40.304926   38882 sshutil.go:53] new ssh client: &{IP:192.168.39.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m03/id_rsa Username:docker}
	I0531 18:48:40.392407   38882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0531 18:48:40.445588   38882 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0531 18:48:40.502558   38882 main.go:141] libmachine: Stopping "ha-080809-m03"...
	I0531 18:48:40.502584   38882 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 18:48:40.504021   38882 main.go:141] libmachine: (ha-080809-m03) Calling .Stop
	I0531 18:48:40.507348   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 0/120
	I0531 18:48:41.508553   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 1/120
	I0531 18:48:42.509854   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 2/120
	I0531 18:48:43.511297   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 3/120
	I0531 18:48:44.512860   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 4/120
	I0531 18:48:45.514646   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 5/120
	I0531 18:48:46.515810   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 6/120
	I0531 18:48:47.517182   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 7/120
	I0531 18:48:48.519573   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 8/120
	I0531 18:48:49.520877   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 9/120
	I0531 18:48:50.522644   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 10/120
	I0531 18:48:51.524166   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 11/120
	I0531 18:48:52.525887   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 12/120
	I0531 18:48:53.527141   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 13/120
	I0531 18:48:54.528438   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 14/120
	I0531 18:48:55.530090   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 15/120
	I0531 18:48:56.531496   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 16/120
	I0531 18:48:57.533260   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 17/120
	I0531 18:48:58.534581   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 18/120
	I0531 18:48:59.535777   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 19/120
	I0531 18:49:00.537397   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 20/120
	I0531 18:49:01.539278   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 21/120
	I0531 18:49:02.540569   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 22/120
	I0531 18:49:03.541830   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 23/120
	I0531 18:49:04.543126   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 24/120
	I0531 18:49:05.544569   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 25/120
	I0531 18:49:06.545963   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 26/120
	I0531 18:49:07.547334   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 27/120
	I0531 18:49:08.548859   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 28/120
	I0531 18:49:09.550299   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 29/120
	I0531 18:49:10.552114   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 30/120
	I0531 18:49:11.553601   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 31/120
	I0531 18:49:12.555240   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 32/120
	I0531 18:49:13.556340   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 33/120
	I0531 18:49:14.557656   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 34/120
	I0531 18:49:15.559157   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 35/120
	I0531 18:49:16.560310   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 36/120
	I0531 18:49:17.561688   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 37/120
	I0531 18:49:18.562896   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 38/120
	I0531 18:49:19.564058   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 39/120
	I0531 18:49:20.565258   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 40/120
	I0531 18:49:21.566371   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 41/120
	I0531 18:49:22.567699   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 42/120
	I0531 18:49:23.568846   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 43/120
	I0531 18:49:24.570001   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 44/120
	I0531 18:49:25.571540   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 45/120
	I0531 18:49:26.572777   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 46/120
	I0531 18:49:27.574005   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 47/120
	I0531 18:49:28.575220   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 48/120
	I0531 18:49:29.576681   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 49/120
	I0531 18:49:30.578429   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 50/120
	I0531 18:49:31.579798   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 51/120
	I0531 18:49:32.581041   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 52/120
	I0531 18:49:33.582403   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 53/120
	I0531 18:49:34.583696   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 54/120
	I0531 18:49:35.585399   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 55/120
	I0531 18:49:36.586768   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 56/120
	I0531 18:49:37.587955   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 57/120
	I0531 18:49:38.589396   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 58/120
	I0531 18:49:39.590771   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 59/120
	I0531 18:49:40.592692   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 60/120
	I0531 18:49:41.594051   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 61/120
	I0531 18:49:42.595256   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 62/120
	I0531 18:49:43.596458   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 63/120
	I0531 18:49:44.597633   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 64/120
	I0531 18:49:45.599342   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 65/120
	I0531 18:49:46.600599   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 66/120
	I0531 18:49:47.601890   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 67/120
	I0531 18:49:48.603139   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 68/120
	I0531 18:49:49.604417   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 69/120
	I0531 18:49:50.606181   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 70/120
	I0531 18:49:51.607674   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 71/120
	I0531 18:49:52.608986   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 72/120
	I0531 18:49:53.610235   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 73/120
	I0531 18:49:54.611638   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 74/120
	I0531 18:49:55.613287   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 75/120
	I0531 18:49:56.614715   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 76/120
	I0531 18:49:57.616063   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 77/120
	I0531 18:49:58.617362   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 78/120
	I0531 18:49:59.618722   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 79/120
	I0531 18:50:00.620439   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 80/120
	I0531 18:50:01.621713   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 81/120
	I0531 18:50:02.623006   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 82/120
	I0531 18:50:03.624241   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 83/120
	I0531 18:50:04.625506   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 84/120
	I0531 18:50:05.627072   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 85/120
	I0531 18:50:06.628288   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 86/120
	I0531 18:50:07.629707   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 87/120
	I0531 18:50:08.630998   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 88/120
	I0531 18:50:09.632134   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 89/120
	I0531 18:50:10.633750   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 90/120
	I0531 18:50:11.635376   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 91/120
	I0531 18:50:12.636586   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 92/120
	I0531 18:50:13.637847   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 93/120
	I0531 18:50:14.639285   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 94/120
	I0531 18:50:15.640760   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 95/120
	I0531 18:50:16.642126   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 96/120
	I0531 18:50:17.643333   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 97/120
	I0531 18:50:18.644667   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 98/120
	I0531 18:50:19.646005   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 99/120
	I0531 18:50:20.647160   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 100/120
	I0531 18:50:21.648272   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 101/120
	I0531 18:50:22.649667   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 102/120
	I0531 18:50:23.651247   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 103/120
	I0531 18:50:24.652518   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 104/120
	I0531 18:50:25.654083   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 105/120
	I0531 18:50:26.655336   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 106/120
	I0531 18:50:27.656531   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 107/120
	I0531 18:50:28.657948   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 108/120
	I0531 18:50:29.659071   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 109/120
	I0531 18:50:30.660563   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 110/120
	I0531 18:50:31.661804   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 111/120
	I0531 18:50:32.662981   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 112/120
	I0531 18:50:33.664252   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 113/120
	I0531 18:50:34.665813   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 114/120
	I0531 18:50:35.668093   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 115/120
	I0531 18:50:36.669492   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 116/120
	I0531 18:50:37.671462   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 117/120
	I0531 18:50:38.672895   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 118/120
	I0531 18:50:39.674325   38882 main.go:141] libmachine: (ha-080809-m03) Waiting for machine to stop 119/120
	I0531 18:50:40.675073   38882 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0531 18:50:40.675154   38882 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0531 18:50:40.677452   38882 out.go:177] 
	W0531 18:50:40.678999   38882 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0531 18:50:40.679018   38882 out.go:239] * 
	W0531 18:50:40.681388   38882 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 18:50:40.682570   38882 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-080809 -v=7 --alsologtostderr" : exit status 82
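The "Waiting for machine to stop N/120" lines above come from libmachine polling the VM and giving up after 120 attempts, at which point the harness surfaces GUEST_STOP_TIMEOUT and the command exits with status 82. A minimal sketch of that poll-with-budget pattern is below, assuming a hypothetical vmRunning helper; it is illustrative only and not minikube's actual implementation — the 120-attempt counter, roughly one-second cadence, and final error string are taken from the log, everything else is an assumption.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls until vmRunning reports false or the attempt budget is
	// exhausted; the 120-attempt counter mirrors the "N/120" lines in the log.
	func waitForStop(vmRunning func() bool) error {
		const attempts = 120
		for i := 0; i < attempts; i++ {
			if !vmRunning() {
				return nil // machine reached the stopped state
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second) // the log lines are spaced roughly one second apart
		}
		// After the budget is exhausted the caller reports GUEST_STOP_TIMEOUT.
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Stub that never stops, reproducing the timeout path seen above.
		if err := waitForStop(func() bool { return true }); err != nil {
			fmt.Println("stop err:", err)
		}
	}
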
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-080809 --wait=true -v=7 --alsologtostderr
E0531 18:54:58.916677   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:59:58.916315   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 19:01:21.961810   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-080809 --wait=true -v=7 --alsologtostderr: exit status 80 (11m35.291155538s)
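For reference, a simplified sketch of how a step like this can be reproduced outside the harness: run the same binary with the same arguments and inspect the exit code. This is illustrative only — the real ha_test.go step goes through minikube's test helpers rather than calling exec directly — and only the binary path and argument list are copied from the Run line above.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "ha-080809", "--wait=true", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// The report above shows this step ending with exit status 80 after
			// about 11m35s; the captured stdout/stderr follows below.
			fmt.Printf("non-zero exit: %d\n%s\n", exitErr.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to run command:", err)
			return
		}
		fmt.Printf("start succeeded:\n%s\n", out)
	}
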

                                                
                                                
-- stdout --
	* [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	* Updating the running kvm2 "ha-080809" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-080809-m02" control-plane node in "ha-080809" cluster
	* Restarting existing kvm2 VM for "ha-080809-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.2
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 18:50:40.725150   39355 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:50:40.725299   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725310   39355 out.go:304] Setting ErrFile to fd 2...
	I0531 18:50:40.725314   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725806   39355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:50:40.726393   39355 out.go:298] Setting JSON to false
	I0531 18:50:40.727315   39355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5587,"bootTime":1717175854,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:50:40.727373   39355 start.go:139] virtualization: kvm guest
	I0531 18:50:40.729712   39355 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:50:40.730928   39355 notify.go:220] Checking for updates...
	I0531 18:50:40.730934   39355 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:50:40.732260   39355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:50:40.733721   39355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:50:40.734983   39355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:50:40.736160   39355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:50:40.737350   39355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:50:40.738897   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:40.738998   39355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:50:40.739474   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.739526   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.754250   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0531 18:50:40.754604   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.755116   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.755134   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.755455   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.755636   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.791266   39355 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 18:50:40.792482   39355 start.go:297] selected driver: kvm2
	I0531 18:50:40.792498   39355 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.792668   39355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:50:40.793032   39355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.793116   39355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:50:40.807323   39355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:50:40.808038   39355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:50:40.808095   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:50:40.808107   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:50:40.808161   39355 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.808298   39355 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.810090   39355 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:50:40.811542   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:50:40.811577   39355 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:50:40.811587   39355 cache.go:56] Caching tarball of preloaded images
	I0531 18:50:40.811660   39355 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:50:40.811671   39355 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:50:40.811776   39355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:50:40.811945   39355 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:50:40.811981   39355 start.go:364] duration metric: took 19.562µs to acquireMachinesLock for "ha-080809"
	I0531 18:50:40.811994   39355 start.go:96] Skipping create...Using existing machine configuration
	I0531 18:50:40.811999   39355 fix.go:54] fixHost starting: 
	I0531 18:50:40.812235   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.812263   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.825678   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0531 18:50:40.826052   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.826541   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.826561   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.826941   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.827113   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.827285   39355 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:50:40.828664   39355 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 18:50:40.828681   39355 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 18:50:40.830849   39355 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 18:50:40.832124   39355 machine.go:94] provisionDockerMachine start ...
	I0531 18:50:40.832140   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.832325   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.834878   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835311   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.835338   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835452   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.835609   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835750   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835893   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.836053   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.836242   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.836254   39355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 18:50:40.946220   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:40.946244   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946488   39355 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:50:40.946518   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946731   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.949045   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949516   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.949544   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949665   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.949823   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.949971   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.950075   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.950241   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.950442   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.950457   39355 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:50:41.073470   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:41.073498   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.076183   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076574   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.076607   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076794   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.076941   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077115   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077244   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.077375   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.077573   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.077596   39355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:50:41.181990   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:50:41.182021   39355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:50:41.182043   39355 buildroot.go:174] setting up certificates
	I0531 18:50:41.182054   39355 provision.go:84] configureAuth start
	I0531 18:50:41.182082   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:41.182335   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:41.184668   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185020   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.185038   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185194   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.187153   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187566   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.187588   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187756   39355 provision.go:143] copyHostCerts
	I0531 18:50:41.187785   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187832   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:50:41.187841   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187901   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:50:41.188000   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188020   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:50:41.188027   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188052   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:50:41.188106   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188122   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:50:41.188128   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188150   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:50:41.188221   39355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 18:50:41.316568   39355 provision.go:177] copyRemoteCerts
	I0531 18:50:41.316621   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:50:41.316641   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.319005   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319346   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.319370   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.319700   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.319815   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.319912   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:41.404144   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:50:41.404210   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:50:41.433385   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:50:41.433444   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:50:41.460422   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:50:41.460475   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:50:41.492171   39355 provision.go:87] duration metric: took 310.107503ms to configureAuth
	I0531 18:50:41.492196   39355 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:50:41.492389   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:41.492450   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.494816   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495193   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.495231   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495385   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.495552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495679   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495809   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.495952   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.496153   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.496176   39355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:50:47.173907   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:50:47.173967   39355 machine.go:97] duration metric: took 6.341813911s to provisionDockerMachine
	I0531 18:50:47.173978   39355 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:50:47.173987   39355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:50:47.174004   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.174368   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:50:47.174390   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.176840   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177242   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.177271   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177380   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.177589   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.177762   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.177941   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.259542   39355 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:50:47.263894   39355 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:50:47.263914   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:50:47.263964   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:50:47.264048   39355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:50:47.264060   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:50:47.264146   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:50:47.273268   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:50:47.296570   39355 start.go:296] duration metric: took 122.58202ms for postStartSetup
	I0531 18:50:47.296610   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.296878   39355 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 18:50:47.296907   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.299205   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299525   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.299561   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299648   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.299826   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.299968   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.300101   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 18:50:47.383092   39355 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 18:50:47.383115   39355 fix.go:56] duration metric: took 6.571114673s for fixHost
	I0531 18:50:47.383140   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.385517   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.385862   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.385890   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.386049   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.386232   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386383   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386490   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.386627   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:47.386788   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:47.386798   39355 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 18:50:47.493714   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181447.480513081
	
	I0531 18:50:47.493743   39355 fix.go:216] guest clock: 1717181447.480513081
	I0531 18:50:47.493763   39355 fix.go:229] Guest: 2024-05-31 18:50:47.480513081 +0000 UTC Remote: 2024-05-31 18:50:47.383123084 +0000 UTC m=+6.690893529 (delta=97.389997ms)
	I0531 18:50:47.493786   39355 fix.go:200] guest clock delta is within tolerance: 97.389997ms
	I0531 18:50:47.493793   39355 start.go:83] releasing machines lock for "ha-080809", held for 6.681803207s
	I0531 18:50:47.493817   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.494049   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:47.496470   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496822   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.496850   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496953   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497467   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497627   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497712   39355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:50:47.497764   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.497783   39355 ssh_runner.go:195] Run: cat /version.json
	I0531 18:50:47.497813   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.500034   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500151   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500362   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500384   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500483   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.500640   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500664   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.500827   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.500848   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.501008   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.501126   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.501254   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.501378   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.622196   39355 ssh_runner.go:195] Run: systemctl --version
	I0531 18:50:47.697116   39355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:50:48.062218   39355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:50:48.099401   39355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:50:48.099472   39355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:50:48.196165   39355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 18:50:48.196190   39355 start.go:494] detecting cgroup driver to use...
	I0531 18:50:48.196275   39355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:50:48.277871   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:50:48.348651   39355 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:50:48.348716   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:50:48.390576   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:50:48.527080   39355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:50:48.863562   39355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:50:49.104326   39355 docker.go:233] disabling docker service ...
	I0531 18:50:49.104389   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:50:49.125823   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:50:49.143789   39355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:50:49.335309   39355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:50:49.535963   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:50:49.550940   39355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:50:49.572576   39355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:50:49.572644   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.583923   39355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:50:49.583975   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.595036   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.608828   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.622691   39355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:50:49.635836   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.648354   39355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.660121   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.671040   39355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:50:49.681113   39355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:50:49.690673   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:50:49.873408   39355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:52:20.348721   39355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475242436s)
	I0531 18:52:20.348763   39355 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:52:20.348839   39355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:52:20.355179   39355 start.go:562] Will wait 60s for crictl version
	I0531 18:52:20.355233   39355 ssh_runner.go:195] Run: which crictl
	I0531 18:52:20.359350   39355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:52:20.402386   39355 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:52:20.402469   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.434311   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.472611   39355 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:52:20.474062   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:52:20.477170   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477634   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:52:20.477660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477870   39355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:52:20.483055   39355 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:52:20.483208   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:52:20.483252   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.532843   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.532863   39355 crio.go:433] Images already preloaded, skipping extraction
	I0531 18:52:20.532904   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.576108   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.576137   39355 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:52:20.576147   39355 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:52:20.576283   39355 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:52:20.576360   39355 ssh_runner.go:195] Run: crio config
	I0531 18:52:20.622002   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:52:20.622029   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:52:20.622042   39355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:52:20.622076   39355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:52:20.622260   39355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:52:20.622285   39355 kube-vip.go:115] generating kube-vip config ...
	I0531 18:52:20.622331   39355 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:52:20.635091   39355 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:52:20.635214   39355 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0531 18:52:20.635270   39355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:52:20.645725   39355 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:52:20.645808   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:52:20.655828   39355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:52:20.672625   39355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:52:20.690003   39355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:52:20.706882   39355 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:52:20.725424   39355 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:52:20.732286   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:52:20.887007   39355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:52:20.902911   39355 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:52:20.902934   39355 certs.go:194] generating shared ca certs ...
	I0531 18:52:20.902958   39355 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:20.903128   39355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:52:20.903167   39355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:52:20.903177   39355 certs.go:256] generating profile certs ...
	I0531 18:52:20.903257   39355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:52:20.903294   39355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 18:52:20.903308   39355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:52:21.121858   39355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 ...
	I0531 18:52:21.121891   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091: {Name:mk82bb224e9d3af07dce0b1684abbe8baa7e4f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122065   39355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 ...
	I0531 18:52:21.122077   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091: {Name:mk76a457401d3f1d1b72821102855c52f0a5de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122151   39355 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:52:21.122297   39355 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:52:21.122420   39355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:52:21.122434   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:52:21.122446   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:52:21.122459   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:52:21.122469   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:52:21.122482   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:52:21.122494   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:52:21.122505   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:52:21.122517   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:52:21.122566   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:52:21.122594   39355 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:52:21.122603   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:52:21.122625   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:52:21.122649   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:52:21.122670   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:52:21.122704   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:52:21.122728   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.122742   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.122754   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.123310   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:52:21.148456   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:52:21.172795   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:52:21.197175   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:52:21.221688   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0531 18:52:21.245580   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:52:21.269278   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:52:21.292738   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:52:21.316727   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:52:21.340739   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:52:21.363482   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:52:21.386799   39355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:52:21.403566   39355 ssh_runner.go:195] Run: openssl version
	I0531 18:52:21.410046   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:52:21.422130   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427059   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427130   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.433755   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:52:21.444010   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:52:21.455312   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.459971   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.460018   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.465518   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:52:21.475141   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:52:21.486477   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491035   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491075   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.497086   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 18:52:21.506659   39355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:52:21.511111   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 18:52:21.516686   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 18:52:21.522265   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 18:52:21.527836   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 18:52:21.533354   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 18:52:21.539272   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0531 18:52:21.544898   39355 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:52:21.545057   39355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:52:21.545119   39355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:52:21.593095   39355 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 18:52:21.593117   39355 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 18:52:21.593121   39355 cri.go:89] found id: "4ddbab57592d0103bc6815778479db8a6144425de981bd8791af83d84165980a"
	I0531 18:52:21.593124   39355 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 18:52:21.593126   39355 cri.go:89] found id: "02c77070fec86c68c73045dc1e91d4efbc9b5849cc419eacc4885001a015d503"
	I0531 18:52:21.593129   39355 cri.go:89] found id: "d110c3c293be8dd490b1b1832bc510e0413fbcd04faeaf6434250ec703f1fa63"
	I0531 18:52:21.593132   39355 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 18:52:21.593144   39355 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 18:52:21.593148   39355 cri.go:89] found id: "dfee637100fd1d000d553477c129ce4eeb3b6cd89076f634caaf1544c2c4474b"
	I0531 18:52:21.593155   39355 cri.go:89] found id: "2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492"
	I0531 18:52:21.593163   39355 cri.go:89] found id: "54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679"
	I0531 18:52:21.593167   39355 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 18:52:21.593171   39355 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 18:52:21.593176   39355 cri.go:89] found id: "588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e"
	I0531 18:52:21.593182   39355 cri.go:89] found id: "0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801"
	I0531 18:52:21.593188   39355 cri.go:89] found id: ""
	I0531 18:52:21.593238   39355 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-080809 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-080809
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.535779758s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-080809 node start m02 -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809 -v=7                                                           | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-080809 -v=7                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-080809 --wait=true -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 18:50:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:50:40.725150   39355 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:50:40.725299   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725310   39355 out.go:304] Setting ErrFile to fd 2...
	I0531 18:50:40.725314   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725806   39355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:50:40.726393   39355 out.go:298] Setting JSON to false
	I0531 18:50:40.727315   39355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5587,"bootTime":1717175854,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:50:40.727373   39355 start.go:139] virtualization: kvm guest
	I0531 18:50:40.729712   39355 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:50:40.730928   39355 notify.go:220] Checking for updates...
	I0531 18:50:40.730934   39355 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:50:40.732260   39355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:50:40.733721   39355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:50:40.734983   39355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:50:40.736160   39355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:50:40.737350   39355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:50:40.738897   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:40.738998   39355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:50:40.739474   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.739526   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.754250   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0531 18:50:40.754604   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.755116   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.755134   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.755455   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.755636   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.791266   39355 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 18:50:40.792482   39355 start.go:297] selected driver: kvm2
	I0531 18:50:40.792498   39355 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.792668   39355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:50:40.793032   39355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.793116   39355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:50:40.807323   39355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:50:40.808038   39355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:50:40.808095   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:50:40.808107   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:50:40.808161   39355 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.808298   39355 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.810090   39355 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:50:40.811542   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:50:40.811577   39355 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:50:40.811587   39355 cache.go:56] Caching tarball of preloaded images
	I0531 18:50:40.811660   39355 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:50:40.811671   39355 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:50:40.811776   39355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:50:40.811945   39355 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:50:40.811981   39355 start.go:364] duration metric: took 19.562µs to acquireMachinesLock for "ha-080809"
	I0531 18:50:40.811994   39355 start.go:96] Skipping create...Using existing machine configuration
	I0531 18:50:40.811999   39355 fix.go:54] fixHost starting: 
	I0531 18:50:40.812235   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.812263   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.825678   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0531 18:50:40.826052   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.826541   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.826561   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.826941   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.827113   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.827285   39355 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:50:40.828664   39355 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 18:50:40.828681   39355 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 18:50:40.830849   39355 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 18:50:40.832124   39355 machine.go:94] provisionDockerMachine start ...
	I0531 18:50:40.832140   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.832325   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.834878   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835311   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.835338   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835452   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.835609   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835750   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835893   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.836053   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.836242   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.836254   39355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 18:50:40.946220   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:40.946244   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946488   39355 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:50:40.946518   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946731   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.949045   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949516   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.949544   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949665   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.949823   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.949971   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.950075   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.950241   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.950442   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.950457   39355 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:50:41.073470   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:41.073498   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.076183   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076574   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.076607   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076794   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.076941   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077115   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077244   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.077375   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.077573   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.077596   39355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:50:41.181990   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:50:41.182021   39355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:50:41.182043   39355 buildroot.go:174] setting up certificates
	I0531 18:50:41.182054   39355 provision.go:84] configureAuth start
	I0531 18:50:41.182082   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:41.182335   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:41.184668   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185020   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.185038   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185194   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.187153   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187566   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.187588   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187756   39355 provision.go:143] copyHostCerts
	I0531 18:50:41.187785   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187832   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:50:41.187841   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187901   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:50:41.188000   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188020   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:50:41.188027   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188052   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:50:41.188106   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188122   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:50:41.188128   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188150   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:50:41.188221   39355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 18:50:41.316568   39355 provision.go:177] copyRemoteCerts
	I0531 18:50:41.316621   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:50:41.316641   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.319005   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319346   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.319370   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.319700   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.319815   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.319912   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:41.404144   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:50:41.404210   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:50:41.433385   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:50:41.433444   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:50:41.460422   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:50:41.460475   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:50:41.492171   39355 provision.go:87] duration metric: took 310.107503ms to configureAuth
	I0531 18:50:41.492196   39355 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:50:41.492389   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:41.492450   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.494816   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495193   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.495231   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495385   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.495552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495679   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495809   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.495952   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.496153   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.496176   39355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:50:47.173907   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:50:47.173967   39355 machine.go:97] duration metric: took 6.341813911s to provisionDockerMachine
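	(Not part of the captured log.) The 6.3s provisioning step above wrote the CRI-O drop-in echoed in the SSH output and restarted the service. A minimal spot-check on the guest, assuming the paths minikube uses here, would be:
	
	    # Drop-in written by the provisioner (contents per the log above):
	    cat /etc/sysconfig/crio.minikube
	    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    # Confirm CRI-O came back after the restart:
	    sudo systemctl is-active crio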
	I0531 18:50:47.173978   39355 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:50:47.173987   39355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:50:47.174004   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.174368   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:50:47.174390   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.176840   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177242   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.177271   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177380   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.177589   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.177762   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.177941   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.259542   39355 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:50:47.263894   39355 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:50:47.263914   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:50:47.263964   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:50:47.264048   39355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:50:47.264060   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:50:47.264146   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:50:47.273268   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:50:47.296570   39355 start.go:296] duration metric: took 122.58202ms for postStartSetup
	I0531 18:50:47.296610   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.296878   39355 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 18:50:47.296907   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.299205   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299525   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.299561   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299648   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.299826   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.299968   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.300101   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 18:50:47.383092   39355 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 18:50:47.383115   39355 fix.go:56] duration metric: took 6.571114673s for fixHost
	I0531 18:50:47.383140   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.385517   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.385862   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.385890   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.386049   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.386232   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386383   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386490   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.386627   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:47.386788   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:47.386798   39355 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:50:47.493714   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181447.480513081
	
	I0531 18:50:47.493743   39355 fix.go:216] guest clock: 1717181447.480513081
	I0531 18:50:47.493763   39355 fix.go:229] Guest: 2024-05-31 18:50:47.480513081 +0000 UTC Remote: 2024-05-31 18:50:47.383123084 +0000 UTC m=+6.690893529 (delta=97.389997ms)
	I0531 18:50:47.493786   39355 fix.go:200] guest clock delta is within tolerance: 97.389997ms
	I0531 18:50:47.493793   39355 start.go:83] releasing machines lock for "ha-080809", held for 6.681803207s
	I0531 18:50:47.493817   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.494049   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:47.496470   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496822   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.496850   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496953   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497467   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497627   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497712   39355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:50:47.497764   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.497783   39355 ssh_runner.go:195] Run: cat /version.json
	I0531 18:50:47.497813   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.500034   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500151   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500362   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500384   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500483   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.500640   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500664   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.500827   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.500848   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.501008   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.501126   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.501254   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.501378   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.622196   39355 ssh_runner.go:195] Run: systemctl --version
	I0531 18:50:47.697116   39355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:50:48.062218   39355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:50:48.099401   39355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:50:48.099472   39355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:50:48.196165   39355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 18:50:48.196190   39355 start.go:494] detecting cgroup driver to use...
	I0531 18:50:48.196275   39355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:50:48.277871   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:50:48.348651   39355 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:50:48.348716   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:50:48.390576   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:50:48.527080   39355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:50:48.863562   39355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:50:49.104326   39355 docker.go:233] disabling docker service ...
	I0531 18:50:49.104389   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:50:49.125823   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:50:49.143789   39355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:50:49.335309   39355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:50:49.535963   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:50:49.550940   39355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:50:49.572576   39355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:50:49.572644   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.583923   39355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:50:49.583975   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.595036   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.608828   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.622691   39355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:50:49.635836   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.648354   39355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.660121   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.671040   39355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:50:49.681113   39355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:50:49.690673   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:50:49.873408   39355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:52:20.348721   39355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475242436s)
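	(Not part of the captured log.) The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the 1m30s restart. A minimal check of the resulting settings, assuming that file layout, would be:
	
	    # Values the provisioner just set (per the log above):
	    grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # pause_image = "registry.k8s.io/pause:3.9"
	    # cgroup_manager = "cgroupfs"
	    # IPv4 forwarding was also enabled via /proc/sys/net/ipv4/ip_forward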
	I0531 18:52:20.348763   39355 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:52:20.348839   39355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:52:20.355179   39355 start.go:562] Will wait 60s for crictl version
	I0531 18:52:20.355233   39355 ssh_runner.go:195] Run: which crictl
	I0531 18:52:20.359350   39355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:52:20.402386   39355 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:52:20.402469   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.434311   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.472611   39355 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:52:20.474062   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:52:20.477170   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477634   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:52:20.477660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477870   39355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:52:20.483055   39355 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:52:20.483208   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:52:20.483252   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.532843   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.532863   39355 crio.go:433] Images already preloaded, skipping extraction
	I0531 18:52:20.532904   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.576108   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.576137   39355 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:52:20.576147   39355 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:52:20.576283   39355 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:52:20.576360   39355 ssh_runner.go:195] Run: crio config
	I0531 18:52:20.622002   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:52:20.622029   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:52:20.622042   39355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:52:20.622076   39355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:52:20.622260   39355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
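	(Not part of the captured log.) The kubeadm config rendered above is later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal, read-only sanity check of the rendered file on the node, assuming that path, would be:
	
	    # Key fields the cluster start depends on (values per the config above):
	    sudo grep -E 'controlPlaneEndpoint|podSubnet|criSocket' /var/tmp/minikube/kubeadm.yaml.new
	    # controlPlaneEndpoint: control-plane.minikube.internal:8443
	    # podSubnet: "10.244.0.0/16"
	    # criSocket: unix:///var/run/crio/crio.sock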
	
	I0531 18:52:20.622285   39355 kube-vip.go:115] generating kube-vip config ...
	I0531 18:52:20.622331   39355 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:52:20.635091   39355 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:52:20.635214   39355 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
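	(Not part of the captured log.) The manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml, so the kubelet runs kube-vip as a static pod that holds the HA VIP 192.168.39.254 on eth0. A minimal check on the control-plane node, assuming that interface name, would be:
	
	    # The VIP should be held by exactly one control-plane node at a time:
	    ip -4 addr show dev eth0 | grep 192.168.39.254
	    # And the static pod should be visible to the container runtime:
	    sudo crictl ps --name kube-vip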
	I0531 18:52:20.635270   39355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:52:20.645725   39355 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:52:20.645808   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:52:20.655828   39355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:52:20.672625   39355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:52:20.690003   39355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:52:20.706882   39355 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:52:20.725424   39355 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:52:20.732286   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:52:20.887007   39355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:52:20.902911   39355 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:52:20.902934   39355 certs.go:194] generating shared ca certs ...
	I0531 18:52:20.902958   39355 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:20.903128   39355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:52:20.903167   39355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:52:20.903177   39355 certs.go:256] generating profile certs ...
	I0531 18:52:20.903257   39355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:52:20.903294   39355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 18:52:20.903308   39355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:52:21.121858   39355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 ...
	I0531 18:52:21.121891   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091: {Name:mk82bb224e9d3af07dce0b1684abbe8baa7e4f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122065   39355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 ...
	I0531 18:52:21.122077   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091: {Name:mk76a457401d3f1d1b72821102855c52f0a5de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122151   39355 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:52:21.122297   39355 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:52:21.122420   39355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:52:21.122434   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:52:21.122446   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:52:21.122459   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:52:21.122469   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:52:21.122482   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:52:21.122494   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:52:21.122505   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:52:21.122517   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:52:21.122566   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:52:21.122594   39355 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:52:21.122603   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:52:21.122625   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:52:21.122649   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:52:21.122670   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:52:21.122704   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:52:21.122728   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.122742   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.122754   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.123310   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:52:21.148456   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:52:21.172795   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:52:21.197175   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:52:21.221688   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0531 18:52:21.245580   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:52:21.269278   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:52:21.292738   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:52:21.316727   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:52:21.340739   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:52:21.363482   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:52:21.386799   39355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:52:21.403566   39355 ssh_runner.go:195] Run: openssl version
	I0531 18:52:21.410046   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:52:21.422130   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427059   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427130   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.433755   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:52:21.444010   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:52:21.455312   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.459971   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.460018   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.465518   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:52:21.475141   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:52:21.486477   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491035   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491075   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.497086   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
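	(Not part of the captured log.) The openssl/ln sequence above mirrors what c_rehash does: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL can locate it. A minimal way to reproduce one of the hashes seen here, assuming the same files, would be:
	
	    # b5213941 is the subject hash of minikubeCA.pem (matching /etc/ssl/certs/b5213941.0):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	    ls -l /etc/ssl/certs/b5213941.0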
	I0531 18:52:21.506659   39355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:52:21.511111   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 18:52:21.516686   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 18:52:21.522265   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 18:52:21.527836   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 18:52:21.533354   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 18:52:21.539272   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0531 18:52:21.544898   39355 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:52:21.545057   39355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:52:21.545119   39355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:52:21.593095   39355 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 18:52:21.593117   39355 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 18:52:21.593121   39355 cri.go:89] found id: "4ddbab57592d0103bc6815778479db8a6144425de981bd8791af83d84165980a"
	I0531 18:52:21.593124   39355 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 18:52:21.593126   39355 cri.go:89] found id: "02c77070fec86c68c73045dc1e91d4efbc9b5849cc419eacc4885001a015d503"
	I0531 18:52:21.593129   39355 cri.go:89] found id: "d110c3c293be8dd490b1b1832bc510e0413fbcd04faeaf6434250ec703f1fa63"
	I0531 18:52:21.593132   39355 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 18:52:21.593144   39355 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 18:52:21.593148   39355 cri.go:89] found id: "dfee637100fd1d000d553477c129ce4eeb3b6cd89076f634caaf1544c2c4474b"
	I0531 18:52:21.593155   39355 cri.go:89] found id: "2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492"
	I0531 18:52:21.593163   39355 cri.go:89] found id: "54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679"
	I0531 18:52:21.593167   39355 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 18:52:21.593171   39355 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 18:52:21.593176   39355 cri.go:89] found id: "588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e"
	I0531 18:52:21.593182   39355 cri.go:89] found id: "0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801"
	I0531 18:52:21.593188   39355 cri.go:89] found id: ""
	I0531 18:52:21.593238   39355 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.680302553Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0086e28e-fda8-4a76-8363-db3c15489632 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.681681628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=01ccbcad-dc2c-43ec-b6c4-ab3b9bec7359 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.682263285Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182136682240738,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=01ccbcad-dc2c-43ec-b6c4-ab3b9bec7359 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.682995974Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6a8f731-51a5-4889-a59d-5ac0ef8fafac name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.683051040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6a8f731-51a5-4889-a59d-5ac0ef8fafac name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.683348296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6a8f731-51a5-4889-a59d-5ac0ef8fafac name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.719998758Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=95a7053b-234e-4a5e-baad-bdb10212a838 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.720070923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=95a7053b-234e-4a5e-baad-bdb10212a838 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.722094905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13e7a0c3-21e5-4484-b785-09d62974260a name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.722548157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182136722526333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13e7a0c3-21e5-4484-b785-09d62974260a name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.723338741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be10c96c-ae0b-4271-ab76-e8de2e3b95b3 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.723414361Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be10c96c-ae0b-4271-ab76-e8de2e3b95b3 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.723730621Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be10c96c-ae0b-4271-ab76-e8de2e3b95b3 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.742576984Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=0206781d-7035-4ea2-861d-12334b4c33ba name=/runtime.v1.RuntimeService/ListPodSandbox
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.742950522Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-mjhlv,Uid:61a4c85e-dc75-49ee-82cc-90f06c8d28b2,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1717181944214995919,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T18:59:03.907071609Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-080809,Uid:980b40354a0f55c9ad17d5a06e5769c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1717181561366006209,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{kubernetes.io/config.hash: 980b40354a0f55c9ad17d5a06e5769c9,kubernetes.io/config.seen: 2024-05-31T18:52:20.709330120Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbkpk,Uid:1de171b4-ac9e-42d1-be02-d585d0d01587,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717181545808915514,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05
-31T18:41:52.783790656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2wg,Uid:9f90f810-88d7-443c-8537-1eb0c65417be,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717181544093570775,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T18:41:52.786549464Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-080809,Uid:b78e0f15ef2137b451056daa67a462ca,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717181544079368665,Labels:map[str
ing]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b78e0f15ef2137b451056daa67a462ca,kubernetes.io/config.seen: 2024-05-31T18:41:33.320349700Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-080809,Uid:63f626d576ff021f9c6d97fcb391e443,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717181544074466381,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/
kube-apiserver.advertise-address.endpoint: 192.168.39.2:8443,kubernetes.io/config.hash: 63f626d576ff021f9c6d97fcb391e443,kubernetes.io/config.seen: 2024-05-31T18:41:33.320348657Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&PodSandboxMetadata{Name:etcd-ha-080809,Uid:0533be9b0d7be6e58a747756dff0c7e6,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1717181544037455759,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.2:2379,kubernetes.io/config.hash: 0533be9b0d7be6e58a747756dff0c7e6,kubernetes.io/config.seen: 2024-05-31T18:41:33.320347488Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ea7aaf55607803
3d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2wg,Uid:9f90f810-88d7-443c-8537-1eb0c65417be,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717181447986402943,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T18:41:52.786549464Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-kbkpk,Uid:1de171b4-ac9e-42d1-be02-d585d0d01587,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717181447673146555,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T18:41:52.783790656Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&PodSandboxMetadata{Name:etcd-ha-080809,Uid:0533be9b0d7be6e58a747756dff0c7e6,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1717181447655207379,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.2:2379,kubernetes.io/config.hash: 0533be9b0d7be6e58a747756dff0c7e6,kubernetes.io/config.seen: 2024-05-31T18:41:33.320347488Z,kubernetes.io/config.source: file,
},RuntimeHandler:,},&PodSandbox{Id:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:11521630-b23f-4574-9eda-8d0e5d0b7e10,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717180913083012858,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"image
PullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-31T18:41:52.775694468Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&PodSandboxMetadata{Name:kindnet-fzzfx,Uid:1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717180907850788218,Labels:map[string]string{app: kindnet,controller-revision-hash: 84c66bd94d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-
31T18:41:47.524118396Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&PodSandboxMetadata{Name:kube-proxy-dmg2w,Uid:50600abe-9fae-41f8-8ff2-05c8407163c0,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1717180907841933262,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T18:41:47.510191433Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-080809,Uid:65d8377fef588f9e0e1afe5ebb475786,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:171718088693921222
5,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 65d8377fef588f9e0e1afe5ebb475786,kubernetes.io/config.seen: 2024-05-31T18:41:26.466114351Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=0206781d-7035-4ea2-861d-12334b4c33ba name=/runtime.v1.RuntimeService/ListPodSandbox
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.743626494Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f032090-11af-4774-b59d-b0c3f142625b name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.743683440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f032090-11af-4774-b59d-b0c3f142625b name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.744246250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f032090-11af-4774-b59d-b0c3f142625b name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.768930844Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e90066c-9b50-43cc-ac34-3d5c72815b6a name=/runtime.v1.RuntimeService/Version
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.769303905Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e90066c-9b50-43cc-ac34-3d5c72815b6a name=/runtime.v1.RuntimeService/Version
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.771395337Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e266fbcb-6d69-47d1-91b0-1b07aa56acbe name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.771796891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182136771776443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e266fbcb-6d69-47d1-91b0-1b07aa56acbe name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.772592012Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=db9d71dc-170f-4be1-92e2-990e6d2a2c60 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.772647143Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=db9d71dc-170f-4be1-92e2-990e6d2a2c60 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:16 ha-080809 crio[4230]: time="2024-05-31 19:02:16.773041511Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=db9d71dc-170f-4be1-92e2-990e6d2a2c60 name=/runtime.v1.RuntimeService/ListContainers
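
The block above is CRI-O's debug trace of the kubelet repeatedly polling /runtime.v1.RuntimeService/ListContainers over the CRI gRPC socket (unix:///var/run/crio/crio.sock, per the node's cri-socket annotation further down). As an illustrative sketch only, not part of the captured log, roughly the same call could be issued with the k8s.io/cri-api Go client as shown below; the empty filter mirrors the "No filters were applied" request in the trace, and every identifier here is an assumption for illustration rather than anything minikube itself runs:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket referenced in the node's cri-socket annotation.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Empty filter, matching the "No filters were applied" request in the log,
		// so the full container list (running and exited) comes back.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}

The "container status" table that follows is essentially this listing rendered one row per container.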
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1290300ca030       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   33c3621daf2d9       busybox-fc5497c4f-mjhlv
	51d878400cbd7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      8 minutes ago       Running             kube-apiserver            4                   3abc2bb414217       kube-apiserver-ha-080809
	cb38fb459f7a7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      8 minutes ago       Running             kube-controller-manager   3                   69407998f4cac       kube-controller-manager-ha-080809
	9f8ac9229e760       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   205250b7981c7       kube-vip-ha-080809
	e769a1e959901       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   86c8aa4ade84f       coredns-7db6d8ff4d-kbkpk
	02b6c1aa0696a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      2                   841629e56b325       etcd-ha-080809
	2b4c89df16ec6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   2                   69407998f4cac       kube-controller-manager-ha-080809
	255424e20e7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   581a95181abc3       coredns-7db6d8ff4d-cr2wg
	4e5a80fb02f63       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            3                   3abc2bb414217       kube-apiserver-ha-080809
	b3f030d4aefd7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago      Exited              coredns                   1                   4ea7aaf556078       coredns-7db6d8ff4d-cr2wg
	6ced8b7504697       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago      Exited              coredns                   1                   12147db6b8bb6       coredns-7db6d8ff4d-kbkpk
	7d3a5d770ba93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      1                   f1c2b748b778a       etcd-ha-080809
	3b39b3e6d2204       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      12 minutes ago      Exited              kindnet-cni               1                   2bdbf3a323266       kindnet-fzzfx
	6ecd8583d0646       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8bd72a9b0e002       storage-provisioner
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Exited              kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Exited              kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	
	
	==> coredns [255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47800 - 31487 "HINFO IN 5755310607458017703.6185900873809144159. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014969161s
	
	
	==> coredns [b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:60635 - 57345 "HINFO IN 1829039879809170348.2413939385768054616. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009746833s
	
	
	==> coredns [e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54832->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54854->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54854->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54864->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
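
All three coredns instances fail the same way: the client-go reflector cannot reach https://10.96.0.1:443, the default ClusterIP of the in-cluster kubernetes Service fronting kube-apiserver, so the "no route to host" / "connection refused" errors line up with the windows in which the kube-apiserver container was down or restarting (see the apiserver attempts 3 and 4 in the container list above). A minimal sketch of the same reachability probe, meaningful only when run from inside the pod network, is shown below; the address and a short timeout come from the log, everything else is illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the address the coredns reflector keeps failing against
		// in the log above; a plain TCP dial reproduces the same reachability check.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("kubernetes service VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("kubernetes service VIP reachable")
	}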
	
	
	==> describe nodes <==
	Name:               ha-080809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:41:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:02:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-080809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1bd8d3c42441893aa8670274ffb1e
	  System UUID:                0eb1bd8d-3c42-4418-93aa-8670274ffb1e
	  Boot ID:                    a71cdc12-b2dc-410e-bd89-30705a38ac97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mjhlv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m15s
	  kube-system                 coredns-7db6d8ff4d-cr2wg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-kbkpk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-080809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-fzzfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-080809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-080809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-dmg2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-080809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-080809                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 20m    kube-proxy       
	  Normal   NodeHasSufficientPID     20m    kubelet          Node ha-080809 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  20m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m    kubelet          Node ha-080809 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m    kubelet          Node ha-080809 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   NodeReady                20m    kubelet          Node ha-080809 status is now: NodeReady
	  Normal   RegisteredNode           19m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           18m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Warning  ContainerGCFailed        10m    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           9m     node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           8m45s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	
	
	Name:               ha-080809-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:42:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:02:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-080809-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26fe5dde81c649a890a62a46837233ef
	  System UUID:                26fe5dde-81c6-49a8-90a6-2a46837233ef
	  Boot ID:                    2c5616c5-3a90-4e43-86f0-7a30d4779b45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dwnsz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  default                     busybox-fc5497c4f-wqwrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-080809-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-6jqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-080809-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-080809-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wxq4h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-080809-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-080809-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m                     kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           19m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-080809-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m32s (x8 over 9m33s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m32s (x8 over 9m33s)  kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m32s (x7 over 9m33s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m                     node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           8m45s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	
	
	Name:               ha-080809-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:43:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-080809-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 547b87f87b134b9798b7d615ab502227
	  System UUID:                547b87f8-7b13-4b97-98b7-d615ab502227
	  Boot ID:                    8e35dd93-e6c7-4f99-9f16-d89c3fb275fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qtbxj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-080809-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-qzfbh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-080809-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-080809-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-nzlz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-080809-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-080809-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x9 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           9m                 node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           8m45s              node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  NodeNotReady             8m20s              node-controller  Node ha-080809-m03 status is now: NodeNotReady
	
	
	Name:               ha-080809-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:44:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-080809-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4babfa9b2c834bb583245811edaf0c5d
	  System UUID:                4babfa9b-2c83-4bb5-8324-5811edaf0c5d
	  Boot ID:                    b6f38232-474d-40b0-b4c0-a2b12b8f03ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gtssc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-mzhqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-080809-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m                 node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           8m45s              node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeNotReady             8m20s              node-controller  Node ha-080809-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.297552] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.055284] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057048] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.190072] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.115739] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.263739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.190448] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.324345] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	[May31 18:50] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.279335] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.265342] systemd-fstab-generator[4055]: Ignoring "noauto" option for root device
	[  +0.167968] systemd-fstab-generator[4067]: Ignoring "noauto" option for root device
	[  +0.358424] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[May31 18:52] systemd-fstab-generator[4310]: Ignoring "noauto" option for root device
	[  +0.086879] kauditd_printk_skb: 183 callbacks suppressed
	[  +5.033045] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.553563] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.792945] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7] <==
	{"level":"warn","ts":"2024-05-31T19:02:17.074036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.162082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.169657Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.17303Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.174488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.19165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.204179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.215253Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.219091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.222413Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.233658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.242347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.252018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.25525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.258085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.266421Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.274766Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.275749Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.286975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.290506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.293658Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.29969Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.30934Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.318541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:17.374518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208] <==
	{"level":"info","ts":"2024-05-31T18:50:49.240783Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.240835Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263146Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263375Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T18:50:49.263509Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263537Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6c80de388e5020e8","initial-advertise-peer-urls":["https://192.168.39.2:2380"],"listen-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T18:50:49.263554Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T18:50:49.286001Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.286069Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.286337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.286394Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	{"level":"info","ts":"2024-05-31T18:50:49.28794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880) learners=(3065880206706227886)"}
	{"level":"info","ts":"2024-05-31T18:50:49.288056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","added-peer-id":"2a8c31cde4a03aae","added-peer-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.288411Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.28866Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.29168Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.314187Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.314781Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.315316Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315418Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.31545Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315649Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.291777Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.327174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 3065880206706227886 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.327314Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	
	
	==> kernel <==
	 19:02:17 up 21 min,  0 users,  load average: 0.18, 0.29, 0.31
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7] <==
	I0531 18:49:56.699488       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 18:49:56.699638       1 main.go:107] hostIP = 192.168.39.2
	podIP = 192.168.39.2
	I0531 18:49:56.699821       1 main.go:116] setting mtu 1500 for CNI 
	I0531 18:49:56.699938       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 18:49:56.699982       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 18:50:05.446913       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:19.448089       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:20.449642       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0531 18:50:38.351425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0531 18:50:41.423495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f] <==
	I0531 18:52:24.743162       1 options.go:221] external host was not specified, using 192.168.39.2
	I0531 18:52:24.745581       1 server.go:148] Version: v1.30.1
	I0531 18:52:24.745654       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:52:25.227677       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0531 18:52:25.227722       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0531 18:52:25.227974       1 instance.go:299] Using reconciler: lease
	I0531 18:52:25.228488       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0531 18:52:25.228576       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0531 18:52:45.224840       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0531 18:52:45.224917       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0531 18:52:45.229261       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0531 18:52:45.229298       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2] <==
	I0531 18:53:20.405905       1 naming_controller.go:291] Starting NamingConditionController
	I0531 18:53:20.405955       1 establishing_controller.go:76] Starting EstablishingController
	I0531 18:53:20.406014       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0531 18:53:20.406061       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0531 18:53:20.406100       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 18:53:20.504196       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:53:20.505140       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 18:53:20.505187       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 18:53:20.505233       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:53:20.505639       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 18:53:20.505670       1 policy_source.go:224] refreshing policies
	I0531 18:53:20.507353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 18:53:20.507411       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 18:53:20.507418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 18:53:20.507532       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 18:53:20.508084       1 aggregator.go:165] initial CRD sync complete...
	I0531 18:53:20.508127       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 18:53:20.508133       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 18:53:20.508138       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:53:20.511670       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 18:53:20.586809       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:53:21.405737       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0531 18:53:21.832605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2 192.168.39.51]
	I0531 18:53:21.835570       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 18:53:21.843932       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70] <==
	I0531 18:52:25.112838       1 serving.go:380] Generated self-signed cert in-memory
	I0531 18:52:25.407512       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0531 18:52:25.407561       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:52:25.409457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 18:52:25.409602       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 18:52:25.410217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0531 18:52:25.410274       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 18:52:46.236432       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f] <==
	I0531 18:53:33.076776       1 shared_informer.go:320] Caches are synced for resource quota
	I0531 18:53:33.117174       1 shared_informer.go:320] Caches are synced for cronjob
	I0531 18:53:33.479648       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 18:53:33.479686       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0531 18:53:33.503918       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 18:53:57.370960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.603449ms"
	I0531 18:53:57.371228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.431µs"
	I0531 18:54:13.136301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.762736ms"
	I0531 18:54:13.137377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.569µs"
	I0531 18:54:13.168018       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6t2hz\": the object has been modified; please apply your changes to the latest version and try again"
	I0531 18:54:13.168253       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"54a51294-0309-47fb-80d7-352a83127b17", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6t2hz": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:59:02.404430       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-qtbxj"
	I0531 18:59:02.440378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.397µs"
	I0531 18:59:02.492972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.900509ms"
	I0531 18:59:02.566711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.669499ms"
	E0531 18:59:02.566949       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:59:02.567433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="224.866µs"
	I0531 18:59:02.573146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.344µs"
	I0531 18:59:03.898839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.59µs"
	I0531 18:59:03.905386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.619µs"
	I0531 18:59:03.917819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.05µs"
	I0531 18:59:03.923771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.105µs"
	I0531 18:59:03.942472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.775µs"
	I0531 18:59:07.932105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.700288ms"
	I0531 18:59:07.932237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.646µs"
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	E0531 18:49:37.041507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.258008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:55.471454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:55.471631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.544552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.545008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:16.977062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:16.977207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:23.119757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:23.120054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:26.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:26.191634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	W0531 18:50:37.956048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:37.956139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:38.789958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:50:38.790048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:50:39.380066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:50:39.380145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:50:39.438437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.438537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:39.864506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:50:39.864612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:50:39.954574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.954665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.130549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:40.130602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.173351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:50:40.173447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:50:40.275786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:50:40.275960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:50:40.403575       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:50:40.403680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:50:40.405616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:50:40.405714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:50:40.695638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:40.695697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:41.623659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:01:56 ha-080809 kubelet[1363]: E0531 19:01:56.417142    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:01:57 ha-080809 kubelet[1363]: E0531 19:01:57.420481    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:01:57 ha-080809 kubelet[1363]: E0531 19:01:57.420557    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:01:57 ha-080809 kubelet[1363]: E0531 19:01:57.420574    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:01:57 ha-080809 kubelet[1363]: E0531 19:01:57.420620    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	May 31 19:02:01 ha-080809 kubelet[1363]: E0531 19:02:01.420272    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists"
	May 31 19:02:01 ha-080809 kubelet[1363]: E0531 19:02:01.420501    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:01 ha-080809 kubelet[1363]: E0531 19:02:01.420529    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:01 ha-080809 kubelet[1363]: E0531 19:02:01.420709    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="11521630-b23f-4574-9eda-8d0e5d0b7e10"
	May 31 19:02:05 ha-080809 kubelet[1363]: E0531 19:02:05.422588    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists"
	May 31 19:02:05 ha-080809 kubelet[1363]: E0531 19:02:05.422655    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:02:05 ha-080809 kubelet[1363]: E0531 19:02:05.422672    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:02:05 ha-080809 kubelet[1363]: E0531 19:02:05.422723    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\\\" already exists\"" pod="kube-system/kube-proxy-dmg2w" podUID="50600abe-9fae-41f8-8ff2-05c8407163c0"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417408    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417686    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417735    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417820    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.415801    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.415989    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.416038    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.416102    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416510    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416576    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416597    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416634    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="11521630-b23f-4574-9eda-8d0e5d0b7e10"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:02:16.347704   41862 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:261: (dbg) Run:  kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (819.48s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (2.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 node delete m03 -v=7 --alsologtostderr: exit status 83 (131.448841ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-080809-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-080809"

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:02:18.256268   41928 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:02:18.256536   41928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:18.256546   41928 out.go:304] Setting ErrFile to fd 2...
	I0531 19:02:18.256551   41928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:18.256707   41928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:02:18.256938   41928 mustload.go:65] Loading cluster: ha-080809
	I0531 19:02:18.257322   41928 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:02:18.257687   41928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.257751   41928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.272582   41928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33529
	I0531 19:02:18.273061   41928 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.273639   41928 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.273662   41928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.274083   41928 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.274278   41928 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 19:02:18.275534   41928 host.go:66] Checking if "ha-080809" exists ...
	I0531 19:02:18.275849   41928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.275884   41928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.290448   41928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0531 19:02:18.290858   41928 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.291379   41928 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.291403   41928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.291732   41928 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.291911   41928 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:02:18.292329   41928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.292364   41928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.306144   41928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38921
	I0531 19:02:18.306538   41928 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.306951   41928 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.306969   41928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.307304   41928 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.307484   41928 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 19:02:18.308978   41928 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 19:02:18.309308   41928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.309351   41928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.323181   41928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35619
	I0531 19:02:18.323542   41928 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.323959   41928 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.323979   41928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.324268   41928 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.324466   41928 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 19:02:18.325074   41928 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.325118   41928 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.339539   41928 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38131
	I0531 19:02:18.339966   41928 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.340410   41928 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.340435   41928 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.340766   41928 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.340914   41928 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 19:02:18.344604   41928 out.go:177] * The control-plane node ha-080809-m03 host is not running: state=Stopped
	I0531 19:02:18.346168   41928 out.go:177]   To start a cluster, run: "minikube start -p ha-080809"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-linux-amd64 -p ha-080809 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 7 (486.846987ms)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080809-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:02:18.388365   41969 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:02:18.388622   41969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:18.388632   41969 out.go:304] Setting ErrFile to fd 2...
	I0531 19:02:18.388636   41969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:18.388825   41969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:02:18.389004   41969 out.go:298] Setting JSON to false
	I0531 19:02:18.389026   41969 mustload.go:65] Loading cluster: ha-080809
	I0531 19:02:18.389071   41969 notify.go:220] Checking for updates...
	I0531 19:02:18.389552   41969 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:02:18.389575   41969 status.go:255] checking status of ha-080809 ...
	I0531 19:02:18.390025   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.390065   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.409933   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0531 19:02:18.410282   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.410874   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.410913   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.411254   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.411406   41969 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 19:02:18.413047   41969 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 19:02:18.413064   41969 host.go:66] Checking if "ha-080809" exists ...
	I0531 19:02:18.413335   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.413365   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.427952   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43623
	I0531 19:02:18.428320   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.428789   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.428811   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.429173   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.429352   41969 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:02:18.432186   41969 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:02:18.432567   41969 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:02:18.432599   41969 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:02:18.432697   41969 host.go:66] Checking if "ha-080809" exists ...
	I0531 19:02:18.432970   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.433015   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.447709   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41173
	I0531 19:02:18.448076   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.448468   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.448486   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.448782   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.448972   41969 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:02:18.449162   41969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:02:18.449188   41969 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:02:18.451568   41969 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:02:18.451901   41969 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:02:18.451932   41969 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:02:18.452104   41969 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:02:18.452249   41969 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:02:18.452357   41969 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:02:18.452508   41969 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:02:18.534270   41969 ssh_runner.go:195] Run: systemctl --version
	I0531 19:02:18.540480   41969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:02:18.555978   41969 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 19:02:18.556013   41969 api_server.go:166] Checking apiserver status ...
	I0531 19:02:18.556053   41969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:02:18.571573   41969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5182/cgroup
	W0531 19:02:18.581934   41969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5182/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:02:18.581967   41969 ssh_runner.go:195] Run: ls
	I0531 19:02:18.586498   41969 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:02:18.593302   41969 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 19:02:18.593320   41969 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 19:02:18.593328   41969 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:02:18.593342   41969 status.go:255] checking status of ha-080809-m02 ...
	I0531 19:02:18.593608   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.593637   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.608524   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0531 19:02:18.608989   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.609439   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.609460   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.609738   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.609915   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 19:02:18.611449   41969 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 19:02:18.611465   41969 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 19:02:18.611736   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.611766   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.625594   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40459
	I0531 19:02:18.625941   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.626394   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.626420   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.626702   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.626900   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 19:02:18.629747   41969 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:18.630165   41969 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 19:02:18.630189   41969 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:18.630349   41969 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 19:02:18.630700   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.630753   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.644670   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46341
	I0531 19:02:18.645035   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.645556   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.645576   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.645849   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.646019   41969 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 19:02:18.646205   41969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:02:18.646231   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 19:02:18.648905   41969 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:18.649319   41969 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 19:02:18.649347   41969 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:18.649490   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 19:02:18.649651   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 19:02:18.649780   41969 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 19:02:18.649897   41969 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 19:02:18.739179   41969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:02:18.761149   41969 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 19:02:18.761173   41969 api_server.go:166] Checking apiserver status ...
	I0531 19:02:18.761200   41969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:02:18.778008   41969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0531 19:02:18.788751   41969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:02:18.788809   41969 ssh_runner.go:195] Run: ls
	I0531 19:02:18.793792   41969 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:02:18.798220   41969 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0531 19:02:18.798244   41969 status.go:422] ha-080809-m02 apiserver status = Running (err=<nil>)
	I0531 19:02:18.798255   41969 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:02:18.798278   41969 status.go:255] checking status of ha-080809-m03 ...
	I0531 19:02:18.798676   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.798723   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.813049   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34877
	I0531 19:02:18.813462   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.813889   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.813934   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.814189   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.814345   41969 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 19:02:18.815882   41969 status.go:330] ha-080809-m03 host status = "Stopped" (err=<nil>)
	I0531 19:02:18.815896   41969 status.go:343] host is not running, skipping remaining checks
	I0531 19:02:18.815903   41969 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:02:18.815927   41969 status.go:255] checking status of ha-080809-m04 ...
	I0531 19:02:18.816187   41969 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:18.816215   41969 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:18.830934   41969 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0531 19:02:18.831336   41969 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:18.831756   41969 main.go:141] libmachine: Using API Version  1
	I0531 19:02:18.831779   41969 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:18.832073   41969 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:18.832257   41969 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 19:02:18.833650   41969 status.go:330] ha-080809-m04 host status = "Stopped" (err=<nil>)
	I0531 19:02:18.833661   41969 status.go:343] host is not running, skipping remaining checks
	I0531 19:02:18.833667   41969 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.520029406s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-080809 node start m02 -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809 -v=7                                                           | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-080809 -v=7                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-080809 --wait=true -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	| node    | ha-080809 node delete m03 -v=7                                                   | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 18:50:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:50:40.725150   39355 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:50:40.725299   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725310   39355 out.go:304] Setting ErrFile to fd 2...
	I0531 18:50:40.725314   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725806   39355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:50:40.726393   39355 out.go:298] Setting JSON to false
	I0531 18:50:40.727315   39355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5587,"bootTime":1717175854,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:50:40.727373   39355 start.go:139] virtualization: kvm guest
	I0531 18:50:40.729712   39355 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:50:40.730928   39355 notify.go:220] Checking for updates...
	I0531 18:50:40.730934   39355 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:50:40.732260   39355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:50:40.733721   39355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:50:40.734983   39355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:50:40.736160   39355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:50:40.737350   39355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:50:40.738897   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:40.738998   39355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:50:40.739474   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.739526   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.754250   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0531 18:50:40.754604   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.755116   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.755134   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.755455   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.755636   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.791266   39355 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 18:50:40.792482   39355 start.go:297] selected driver: kvm2
	I0531 18:50:40.792498   39355 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.792668   39355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:50:40.793032   39355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.793116   39355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:50:40.807323   39355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:50:40.808038   39355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:50:40.808095   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:50:40.808107   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:50:40.808161   39355 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.808298   39355 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.810090   39355 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:50:40.811542   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:50:40.811577   39355 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:50:40.811587   39355 cache.go:56] Caching tarball of preloaded images
	I0531 18:50:40.811660   39355 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:50:40.811671   39355 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:50:40.811776   39355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:50:40.811945   39355 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:50:40.811981   39355 start.go:364] duration metric: took 19.562µs to acquireMachinesLock for "ha-080809"
	I0531 18:50:40.811994   39355 start.go:96] Skipping create...Using existing machine configuration
	I0531 18:50:40.811999   39355 fix.go:54] fixHost starting: 
	I0531 18:50:40.812235   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.812263   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.825678   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0531 18:50:40.826052   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.826541   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.826561   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.826941   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.827113   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.827285   39355 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:50:40.828664   39355 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 18:50:40.828681   39355 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 18:50:40.830849   39355 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 18:50:40.832124   39355 machine.go:94] provisionDockerMachine start ...
	I0531 18:50:40.832140   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.832325   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.834878   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835311   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.835338   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835452   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.835609   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835750   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835893   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.836053   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.836242   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.836254   39355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 18:50:40.946220   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:40.946244   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946488   39355 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:50:40.946518   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946731   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.949045   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949516   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.949544   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949665   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.949823   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.949971   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.950075   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.950241   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.950442   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.950457   39355 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:50:41.073470   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:41.073498   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.076183   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076574   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.076607   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076794   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.076941   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077115   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077244   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.077375   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.077573   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.077596   39355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:50:41.181990   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:50:41.182021   39355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:50:41.182043   39355 buildroot.go:174] setting up certificates
	I0531 18:50:41.182054   39355 provision.go:84] configureAuth start
	I0531 18:50:41.182082   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:41.182335   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:41.184668   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185020   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.185038   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185194   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.187153   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187566   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.187588   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187756   39355 provision.go:143] copyHostCerts
	I0531 18:50:41.187785   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187832   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:50:41.187841   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187901   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:50:41.188000   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188020   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:50:41.188027   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188052   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:50:41.188106   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188122   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:50:41.188128   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188150   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:50:41.188221   39355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 18:50:41.316568   39355 provision.go:177] copyRemoteCerts
	I0531 18:50:41.316621   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:50:41.316641   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.319005   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319346   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.319370   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.319700   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.319815   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.319912   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:41.404144   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:50:41.404210   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:50:41.433385   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:50:41.433444   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:50:41.460422   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:50:41.460475   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:50:41.492171   39355 provision.go:87] duration metric: took 310.107503ms to configureAuth
	I0531 18:50:41.492196   39355 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:50:41.492389   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:41.492450   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.494816   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495193   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.495231   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495385   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.495552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495679   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495809   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.495952   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.496153   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.496176   39355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:50:47.173907   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:50:47.173967   39355 machine.go:97] duration metric: took 6.341813911s to provisionDockerMachine
	I0531 18:50:47.173978   39355 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:50:47.173987   39355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:50:47.174004   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.174368   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:50:47.174390   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.176840   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177242   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.177271   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177380   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.177589   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.177762   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.177941   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.259542   39355 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:50:47.263894   39355 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:50:47.263914   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:50:47.263964   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:50:47.264048   39355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:50:47.264060   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:50:47.264146   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:50:47.273268   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:50:47.296570   39355 start.go:296] duration metric: took 122.58202ms for postStartSetup
	I0531 18:50:47.296610   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.296878   39355 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 18:50:47.296907   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.299205   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299525   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.299561   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299648   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.299826   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.299968   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.300101   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 18:50:47.383092   39355 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 18:50:47.383115   39355 fix.go:56] duration metric: took 6.571114673s for fixHost
	I0531 18:50:47.383140   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.385517   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.385862   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.385890   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.386049   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.386232   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386383   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386490   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.386627   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:47.386788   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:47.386798   39355 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:50:47.493714   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181447.480513081
	
	I0531 18:50:47.493743   39355 fix.go:216] guest clock: 1717181447.480513081
	I0531 18:50:47.493763   39355 fix.go:229] Guest: 2024-05-31 18:50:47.480513081 +0000 UTC Remote: 2024-05-31 18:50:47.383123084 +0000 UTC m=+6.690893529 (delta=97.389997ms)
	I0531 18:50:47.493786   39355 fix.go:200] guest clock delta is within tolerance: 97.389997ms
	I0531 18:50:47.493793   39355 start.go:83] releasing machines lock for "ha-080809", held for 6.681803207s
	I0531 18:50:47.493817   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.494049   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:47.496470   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496822   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.496850   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496953   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497467   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497627   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497712   39355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:50:47.497764   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.497783   39355 ssh_runner.go:195] Run: cat /version.json
	I0531 18:50:47.497813   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.500034   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500151   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500362   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500384   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500483   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.500640   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500664   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.500827   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.500848   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.501008   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.501126   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.501254   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.501378   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.622196   39355 ssh_runner.go:195] Run: systemctl --version
	I0531 18:50:47.697116   39355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:50:48.062218   39355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:50:48.099401   39355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:50:48.099472   39355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:50:48.196165   39355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 18:50:48.196190   39355 start.go:494] detecting cgroup driver to use...
	I0531 18:50:48.196275   39355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:50:48.277871   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:50:48.348651   39355 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:50:48.348716   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:50:48.390576   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:50:48.527080   39355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:50:48.863562   39355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:50:49.104326   39355 docker.go:233] disabling docker service ...
	I0531 18:50:49.104389   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:50:49.125823   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:50:49.143789   39355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:50:49.335309   39355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:50:49.535963   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:50:49.550940   39355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:50:49.572576   39355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:50:49.572644   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.583923   39355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:50:49.583975   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.595036   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.608828   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.622691   39355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:50:49.635836   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.648354   39355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.660121   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.671040   39355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:50:49.681113   39355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:50:49.690673   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:50:49.873408   39355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:52:20.348721   39355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475242436s)
	I0531 18:52:20.348763   39355 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:52:20.348839   39355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:52:20.355179   39355 start.go:562] Will wait 60s for crictl version
	I0531 18:52:20.355233   39355 ssh_runner.go:195] Run: which crictl
	I0531 18:52:20.359350   39355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:52:20.402386   39355 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:52:20.402469   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.434311   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.472611   39355 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:52:20.474062   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:52:20.477170   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477634   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:52:20.477660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477870   39355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:52:20.483055   39355 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:52:20.483208   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:52:20.483252   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.532843   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.532863   39355 crio.go:433] Images already preloaded, skipping extraction
	I0531 18:52:20.532904   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.576108   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.576137   39355 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:52:20.576147   39355 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:52:20.576283   39355 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:52:20.576360   39355 ssh_runner.go:195] Run: crio config
	I0531 18:52:20.622002   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:52:20.622029   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:52:20.622042   39355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:52:20.622076   39355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:52:20.622260   39355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 18:52:20.622285   39355 kube-vip.go:115] generating kube-vip config ...
	I0531 18:52:20.622331   39355 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:52:20.635091   39355 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:52:20.635214   39355 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0531 18:52:20.635270   39355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:52:20.645725   39355 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:52:20.645808   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:52:20.655828   39355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:52:20.672625   39355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:52:20.690003   39355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:52:20.706882   39355 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:52:20.725424   39355 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:52:20.732286   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:52:20.887007   39355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:52:20.902911   39355 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:52:20.902934   39355 certs.go:194] generating shared ca certs ...
	I0531 18:52:20.902958   39355 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:20.903128   39355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:52:20.903167   39355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:52:20.903177   39355 certs.go:256] generating profile certs ...
	I0531 18:52:20.903257   39355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:52:20.903294   39355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 18:52:20.903308   39355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:52:21.121858   39355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 ...
	I0531 18:52:21.121891   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091: {Name:mk82bb224e9d3af07dce0b1684abbe8baa7e4f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122065   39355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 ...
	I0531 18:52:21.122077   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091: {Name:mk76a457401d3f1d1b72821102855c52f0a5de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122151   39355 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:52:21.122297   39355 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:52:21.122420   39355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:52:21.122434   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:52:21.122446   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:52:21.122459   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:52:21.122469   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:52:21.122482   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:52:21.122494   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:52:21.122505   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:52:21.122517   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:52:21.122566   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:52:21.122594   39355 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:52:21.122603   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:52:21.122625   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:52:21.122649   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:52:21.122670   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:52:21.122704   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:52:21.122728   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.122742   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.122754   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.123310   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:52:21.148456   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:52:21.172795   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:52:21.197175   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:52:21.221688   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0531 18:52:21.245580   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:52:21.269278   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:52:21.292738   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:52:21.316727   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:52:21.340739   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:52:21.363482   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:52:21.386799   39355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:52:21.403566   39355 ssh_runner.go:195] Run: openssl version
	I0531 18:52:21.410046   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:52:21.422130   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427059   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427130   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.433755   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:52:21.444010   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:52:21.455312   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.459971   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.460018   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.465518   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:52:21.475141   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:52:21.486477   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491035   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491075   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.497086   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
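The three openssl/ln rounds just above are how the extra CA certificates are made visible to OpenSSL inside the VM: each PEM file under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked into /etc/ssl/certs under `<hash>.0` (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A minimal Go sketch of that pattern, shelling out to a local openssl binary rather than going through minikube's ssh_runner, might look like this (illustrative only, not minikube's own code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installTrustLink runs `openssl x509 -hash -noout -in <cert>` to get the
    // subject hash, then symlinks /etc/ssl/certs/<hash>.0 to the certificate,
    // which is where OpenSSL looks up trusted CAs by hash.
    func installTrustLink(certPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate `ln -fs`: replace any stale link first
        return link, os.Symlink(certPath, link)
    }

    func main() {
        link, err := installTrustLink("/etc/ssl/certs/minikubeCA.pem")
        if err != nil {
            fmt.Println("failed to install trust link:", err)
            return
        }
        fmt.Println("installed", link) // e.g. /etc/ssl/certs/b5213941.0 above
    }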
	I0531 18:52:21.506659   39355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:52:21.511111   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 18:52:21.516686   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 18:52:21.522265   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 18:52:21.527836   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 18:52:21.533354   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 18:52:21.539272   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
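Each `-checkend 86400` invocation above asks openssl whether the certificate in question remains valid for at least another 86400 seconds (24 hours); openssl exits non-zero if it does not, presumably so that expiring control-plane certificates are caught before kubeadm is started. A rough local equivalent in Go, again just wrapping the openssl CLI and not reusing any minikube helper, could be:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // checkCertValid mirrors the logged checks: `openssl x509 -checkend 86400`
    // exits non-zero when the certificate expires within the next 24 hours
    // (or cannot be parsed at all).
    func checkCertValid(path string) error {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
        if err := cmd.Run(); err != nil {
            return fmt.Errorf("%s expires within 24h or is unreadable: %w", path, err)
        }
        return nil
    }

    func main() {
        // Paths taken from the checks logged above.
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            if err := checkCertValid(p); err != nil {
                fmt.Println(err)
            }
        }
    }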
	I0531 18:52:21.544898   39355 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:52:21.545057   39355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:52:21.545119   39355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:52:21.593095   39355 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 18:52:21.593117   39355 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 18:52:21.593121   39355 cri.go:89] found id: "4ddbab57592d0103bc6815778479db8a6144425de981bd8791af83d84165980a"
	I0531 18:52:21.593124   39355 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 18:52:21.593126   39355 cri.go:89] found id: "02c77070fec86c68c73045dc1e91d4efbc9b5849cc419eacc4885001a015d503"
	I0531 18:52:21.593129   39355 cri.go:89] found id: "d110c3c293be8dd490b1b1832bc510e0413fbcd04faeaf6434250ec703f1fa63"
	I0531 18:52:21.593132   39355 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 18:52:21.593144   39355 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 18:52:21.593148   39355 cri.go:89] found id: "dfee637100fd1d000d553477c129ce4eeb3b6cd89076f634caaf1544c2c4474b"
	I0531 18:52:21.593155   39355 cri.go:89] found id: "2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492"
	I0531 18:52:21.593163   39355 cri.go:89] found id: "54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679"
	I0531 18:52:21.593167   39355 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 18:52:21.593171   39355 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 18:52:21.593176   39355 cri.go:89] found id: "588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e"
	I0531 18:52:21.593182   39355 cri.go:89] found id: "0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801"
	I0531 18:52:21.593188   39355 cri.go:89] found id: ""
	I0531 18:52:21.593238   39355 ssh_runner.go:195] Run: sudo runc list -f json
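The block of "found id" lines above is the parsed output of the single `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call logged at 18:52:21.545: with `--quiet`, crictl prints one container ID per line, and cri.go records each one before the subsequent `runc list` step. A self-contained sketch of the same query, run directly on a host with crictl installed instead of over the VM's SSH session (illustrative, not minikube's code), could be:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listKubeSystemContainers runs the same crictl query seen in the log and
    // splits its --quiet output (one container ID per line) into a slice.
    func listKubeSystemContainers() ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := listKubeSystemContainers()
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        for _, id := range ids {
            fmt.Println("found id:", id)
        }
    }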
	
	
	==> CRI-O <==
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.433502502Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182139433483446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7143c590-f29a-4ca4-87d7-22240dd4738e name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.433970806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e54716f5-cb00-407b-b12a-bde52d159f77 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.434039350Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e54716f5-cb00-407b-b12a-bde52d159f77 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.434336211Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e54716f5-cb00-407b-b12a-bde52d159f77 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.474585459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f4105d1d-2bad-41d4-8948-63895ce3e147 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.474680720Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f4105d1d-2bad-41d4-8948-63895ce3e147 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.475950222Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4b9785cc-f7b6-49fa-8139-29b9b5cccc79 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.476393084Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182139476371716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4b9785cc-f7b6-49fa-8139-29b9b5cccc79 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.477031087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=36c8b37f-2591-477b-9409-cbc3240d4c48 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.477103940Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=36c8b37f-2591-477b-9409-cbc3240d4c48 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.477433917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=36c8b37f-2591-477b-9409-cbc3240d4c48 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.516593974Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c780d825-665a-4537-9bbb-86e0f77f1fd4 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.516681387Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c780d825-665a-4537-9bbb-86e0f77f1fd4 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.518466192Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e0e42e7-e583-402b-90ce-534eee77291b name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.519085505Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182139518952079,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e0e42e7-e583-402b-90ce-534eee77291b name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.519778621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f6c95905-c75c-4c4b-8a35-8a39825a13cd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.519900378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f6c95905-c75c-4c4b-8a35-8a39825a13cd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.520288061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f6c95905-c75c-4c4b-8a35-8a39825a13cd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.563994908Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa7840cc-6b62-4c15-9711-7ed461e91073 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.564095368Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa7840cc-6b62-4c15-9711-7ed461e91073 name=/runtime.v1.RuntimeService/Version
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.564804927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6109fcf4-cee1-49d6-80fb-048600355939 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.565335344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182139565311001,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6109fcf4-cee1-49d6-80fb-048600355939 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.565785768Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c26c5ec-17c9-4217-92ca-e838ab55493f name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.565905696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c26c5ec-17c9-4217-92ca-e838ab55493f name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:02:19 ha-080809 crio[4230]: time="2024-05-31 19:02:19.566226644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:4,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717181598426220347,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 4,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"p
rotocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.
kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]strin
g{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717181544308700139,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.
kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8f
f4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39
487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAIN
ER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9cea
ccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2
fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotati
ons:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSp
ecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c26c5ec-17c9-4217-92ca-e838ab55493f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b1290300ca030       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   33c3621daf2d9       busybox-fc5497c4f-mjhlv
	51d878400cbd7       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Running             kube-apiserver            4                   3abc2bb414217       kube-apiserver-ha-080809
	cb38fb459f7a7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Running             kube-controller-manager   3                   69407998f4cac       kube-controller-manager-ha-080809
	9f8ac9229e760       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      9 minutes ago       Running             kube-vip                  0                   205250b7981c7       kube-vip-ha-080809
	e769a1e959901       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   86c8aa4ade84f       coredns-7db6d8ff4d-kbkpk
	02b6c1aa0696a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Running             etcd                      2                   841629e56b325       etcd-ha-080809
	2b4c89df16ec6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   2                   69407998f4cac       kube-controller-manager-ha-080809
	255424e20e7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Running             coredns                   2                   581a95181abc3       coredns-7db6d8ff4d-cr2wg
	4e5a80fb02f63       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            3                   3abc2bb414217       kube-apiserver-ha-080809
	b3f030d4aefd7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago      Exited              coredns                   1                   4ea7aaf556078       coredns-7db6d8ff4d-cr2wg
	6ced8b7504697       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      11 minutes ago      Exited              coredns                   1                   12147db6b8bb6       coredns-7db6d8ff4d-kbkpk
	7d3a5d770ba93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      11 minutes ago      Exited              etcd                      1                   f1c2b748b778a       etcd-ha-080809
	3b39b3e6d2204       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      12 minutes ago      Exited              kindnet-cni               1                   2bdbf3a323266       kindnet-fzzfx
	6ecd8583d0646       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8bd72a9b0e002       storage-provisioner
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      20 minutes ago      Exited              kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      20 minutes ago      Exited              kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	
	
	==> coredns [255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47800 - 31487 "HINFO IN 5755310607458017703.6185900873809144159. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014969161s
	
	
	==> coredns [b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:60635 - 57345 "HINFO IN 1829039879809170348.2413939385768054616. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009746833s
	
	
	==> coredns [e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54832->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54854->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54854->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54864->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.7:54864->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-080809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:41:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:02:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:59:37 +0000   Fri, 31 May 2024 18:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-080809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1bd8d3c42441893aa8670274ffb1e
	  System UUID:                0eb1bd8d-3c42-4418-93aa-8670274ffb1e
	  Boot ID:                    a71cdc12-b2dc-410e-bd89-30705a38ac97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mjhlv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 coredns-7db6d8ff4d-cr2wg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 coredns-7db6d8ff4d-kbkpk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-ha-080809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-fzzfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-ha-080809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-ha-080809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-dmg2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-ha-080809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-vip-ha-080809                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 20m    kube-proxy       
	  Normal   NodeHasSufficientPID     20m    kubelet          Node ha-080809 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  20m    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m    kubelet          Node ha-080809 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m    kubelet          Node ha-080809 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           20m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   NodeReady                20m    kubelet          Node ha-080809 status is now: NodeReady
	  Normal   RegisteredNode           19m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           18m    node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Warning  ContainerGCFailed        10m    kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           9m2s   node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           8m47s  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	
	
	Name:               ha-080809-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:42:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:02:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 18:59:01 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-080809-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26fe5dde81c649a890a62a46837233ef
	  System UUID:                26fe5dde-81c6-49a8-90a6-2a46837233ef
	  Boot ID:                    2c5616c5-3a90-4e43-86f0-7a30d4779b45
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dwnsz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  default                     busybox-fc5497c4f-wqwrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-080809-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-6jqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-080809-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-080809-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-wxq4h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-080809-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-080809-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           19m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           18m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  NodeNotReady             16m                    node-controller  Node ha-080809-m02 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m34s (x8 over 9m35s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m34s (x8 over 9m35s)  kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m34s (x7 over 9m35s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m2s                   node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           8m47s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	
	
	Name:               ha-080809-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:43:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-080809-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 547b87f87b134b9798b7d615ab502227
	  System UUID:                547b87f8-7b13-4b97-98b7-d615ab502227
	  Boot ID:                    8e35dd93-e6c7-4f99-9f16-d89c3fb275fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qtbxj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-080809-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-qzfbh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-080809-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-080809-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-nzlz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-080809-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-080809-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x9 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           9m3s               node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           8m48s              node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  NodeNotReady             8m23s              node-controller  Node ha-080809-m03 status is now: NodeNotReady
	
	
	Name:               ha-080809-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:44:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-080809-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4babfa9b2c834bb583245811edaf0c5d
	  System UUID:                4babfa9b-2c83-4bb5-8324-5811edaf0c5d
	  Boot ID:                    b6f38232-474d-40b0-b4c0-a2b12b8f03ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gtssc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-proxy-mzhqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x2 over 17m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeReady                17m                kubelet          Node ha-080809-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m3s               node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           8m48s              node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeNotReady             8m23s              node-controller  Node ha-080809-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.297552] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.055284] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057048] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.190072] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.115739] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.263739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.190448] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.324345] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	[May31 18:50] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.279335] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.265342] systemd-fstab-generator[4055]: Ignoring "noauto" option for root device
	[  +0.167968] systemd-fstab-generator[4067]: Ignoring "noauto" option for root device
	[  +0.358424] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[May31 18:52] systemd-fstab-generator[4310]: Ignoring "noauto" option for root device
	[  +0.086879] kauditd_printk_skb: 183 callbacks suppressed
	[  +5.033045] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.553563] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.792945] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7] <==
	{"level":"warn","ts":"2024-05-31T19:02:19.914337Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.917585Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.931186Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.940644Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.950547Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.955825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.959423Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.968792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.97373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.979123Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.989129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.992118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:19.995056Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.002767Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2a8c31cde4a03aae","rtt":"0s","error":"dial tcp 192.168.39.132:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-31T19:02:20.00358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.006373Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2a8c31cde4a03aae","rtt":"0s","error":"dial tcp 192.168.39.132:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-31T19:02:20.012234Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.02122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.024521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.027495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.03279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.043Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.073944Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.087075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:02:20.100029Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> etcd [7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208] <==
	{"level":"info","ts":"2024-05-31T18:50:49.240783Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.240835Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263146Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263375Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T18:50:49.263509Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263537Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6c80de388e5020e8","initial-advertise-peer-urls":["https://192.168.39.2:2380"],"listen-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T18:50:49.263554Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T18:50:49.286001Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.286069Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.286337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.286394Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	{"level":"info","ts":"2024-05-31T18:50:49.28794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880) learners=(3065880206706227886)"}
	{"level":"info","ts":"2024-05-31T18:50:49.288056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","added-peer-id":"2a8c31cde4a03aae","added-peer-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.288411Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.28866Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.29168Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.314187Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.314781Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.315316Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315418Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.31545Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315649Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.291777Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.327174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 3065880206706227886 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.327314Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	
	
	==> kernel <==
	 19:02:20 up 21 min,  0 users,  load average: 0.16, 0.29, 0.30
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7] <==
	I0531 18:49:56.699488       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 18:49:56.699638       1 main.go:107] hostIP = 192.168.39.2
	podIP = 192.168.39.2
	I0531 18:49:56.699821       1 main.go:116] setting mtu 1500 for CNI 
	I0531 18:49:56.699938       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 18:49:56.699982       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 18:50:05.446913       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:19.448089       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:20.449642       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0531 18:50:38.351425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0531 18:50:41.423495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [4e5a80fb02f63c3daae46599d52a2b983d9eab50c5d50c232bf54a9c034c726f] <==
	I0531 18:52:24.743162       1 options.go:221] external host was not specified, using 192.168.39.2
	I0531 18:52:24.745581       1 server.go:148] Version: v1.30.1
	I0531 18:52:24.745654       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:52:25.227677       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0531 18:52:25.227722       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0531 18:52:25.227974       1 instance.go:299] Using reconciler: lease
	I0531 18:52:25.228488       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0531 18:52:25.228576       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	W0531 18:52:45.224840       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0531 18:52:45.224917       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0531 18:52:45.229261       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0531 18:52:45.229298       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [51d878400cbd7acc87f31894b7d277e0a57858c696d7c4f94ee57c409e2558c2] <==
	I0531 18:53:20.405905       1 naming_controller.go:291] Starting NamingConditionController
	I0531 18:53:20.405955       1 establishing_controller.go:76] Starting EstablishingController
	I0531 18:53:20.406014       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0531 18:53:20.406061       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0531 18:53:20.406100       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 18:53:20.504196       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 18:53:20.505140       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 18:53:20.505187       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 18:53:20.505233       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 18:53:20.505639       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 18:53:20.505670       1 policy_source.go:224] refreshing policies
	I0531 18:53:20.507353       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 18:53:20.507411       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 18:53:20.507418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 18:53:20.507532       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 18:53:20.508084       1 aggregator.go:165] initial CRD sync complete...
	I0531 18:53:20.508127       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 18:53:20.508133       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 18:53:20.508138       1 cache.go:39] Caches are synced for autoregister controller
	I0531 18:53:20.511670       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 18:53:20.586809       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 18:53:21.405737       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0531 18:53:21.832605       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2 192.168.39.51]
	I0531 18:53:21.835570       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 18:53:21.843932       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70] <==
	I0531 18:52:25.112838       1 serving.go:380] Generated self-signed cert in-memory
	I0531 18:52:25.407512       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0531 18:52:25.407561       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:52:25.409457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 18:52:25.409602       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 18:52:25.410217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0531 18:52:25.410274       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 18:52:46.236432       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f] <==
	I0531 18:53:33.076776       1 shared_informer.go:320] Caches are synced for resource quota
	I0531 18:53:33.117174       1 shared_informer.go:320] Caches are synced for cronjob
	I0531 18:53:33.479648       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 18:53:33.479686       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0531 18:53:33.503918       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 18:53:57.370960       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.603449ms"
	I0531 18:53:57.371228       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.431µs"
	I0531 18:54:13.136301       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="19.762736ms"
	I0531 18:54:13.137377       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.569µs"
	I0531 18:54:13.168018       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6t2hz\": the object has been modified; please apply your changes to the latest version and try again"
	I0531 18:54:13.168253       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"54a51294-0309-47fb-80d7-352a83127b17", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6t2hz": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:59:02.404430       1 taint_eviction.go:113] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-fc5497c4f-qtbxj"
	I0531 18:59:02.440378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.397µs"
	I0531 18:59:02.492972       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.900509ms"
	I0531 18:59:02.566711       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.669499ms"
	E0531 18:59:02.566949       1 replica_set.go:557] sync "default/busybox-fc5497c4f" failed with Operation cannot be fulfilled on replicasets.apps "busybox-fc5497c4f": the object has been modified; please apply your changes to the latest version and try again
	I0531 18:59:02.567433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="224.866µs"
	I0531 18:59:02.573146       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.344µs"
	I0531 18:59:03.898839       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.59µs"
	I0531 18:59:03.905386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.619µs"
	I0531 18:59:03.917819       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.05µs"
	I0531 18:59:03.923771       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.105µs"
	I0531 18:59:03.942472       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.775µs"
	I0531 18:59:07.932105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.700288ms"
	I0531 18:59:07.932237       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.646µs"
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	E0531 18:49:37.041507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.258008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:55.471454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:55.471631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.544552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.545008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:16.977062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:16.977207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:23.119757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:23.120054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:26.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:26.191634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	W0531 18:50:37.956048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:37.956139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:38.789958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:50:38.790048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:50:39.380066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:50:39.380145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:50:39.438437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.438537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:39.864506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:50:39.864612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:50:39.954574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.954665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.130549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:40.130602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.173351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:50:40.173447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:50:40.275786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:50:40.275960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:50:40.403575       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:50:40.403680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:50:40.405616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:50:40.405714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:50:40.695638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:40.695697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:41.623659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:02:05 ha-080809 kubelet[1363]: E0531 19:02:05.422723    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\\\" already exists\"" pod="kube-system/kube-proxy-dmg2w" podUID="50600abe-9fae-41f8-8ff2-05c8407163c0"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417408    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417686    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417735    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:07 ha-080809 kubelet[1363]: E0531 19:02:07.417820    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.415801    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.415989    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.416038    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:08 ha-080809 kubelet[1363]: E0531 19:02:08.416102    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416510    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416576    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416597    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:02:15 ha-080809 kubelet[1363]: E0531 19:02:15.416634    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="11521630-b23f-4574-9eda-8d0e5d0b7e10"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.418522    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.418632    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.419367    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.422117    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\\\" already exists\"" pod="kube-system/kube-proxy-dmg2w" podUID="50600abe-9fae-41f8-8ff2-05c8407163c0"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.422261    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.422291    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.422311    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:02:19 ha-080809 kubelet[1363]: E0531 19:02:19.422340    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:02:20 ha-080809 kubelet[1363]: E0531 19:02:20.416307    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:02:20 ha-080809 kubelet[1363]: E0531 19:02:20.416351    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:20 ha-080809 kubelet[1363]: E0531 19:02:20.416368    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:02:20 ha-080809 kubelet[1363]: E0531 19:02:20.416396    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:02:19.130286   42075 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:261: (dbg) Run:  kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.74s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (174.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 stop -v=7 --alsologtostderr: exit status 82 (2m3.781633593s)

                                                
                                                
-- stdout --
	* Stopping node "ha-080809-m04"  ...
	* Stopping node "ha-080809-m03"  ...
	* Stopping node "ha-080809-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:02:21.376822   42193 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:02:21.377082   42193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:21.377092   42193 out.go:304] Setting ErrFile to fd 2...
	I0531 19:02:21.377096   42193 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:02:21.377253   42193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:02:21.377453   42193 out.go:298] Setting JSON to false
	I0531 19:02:21.377520   42193 mustload.go:65] Loading cluster: ha-080809
	I0531 19:02:21.377831   42193 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:02:21.377959   42193 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 19:02:21.378154   42193 mustload.go:65] Loading cluster: ha-080809
	I0531 19:02:21.378299   42193 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:02:21.378329   42193 stop.go:39] StopHost: ha-080809-m04
	I0531 19:02:21.378699   42193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:21.378737   42193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:21.392601   42193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37459
	I0531 19:02:21.393099   42193 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:21.393675   42193 main.go:141] libmachine: Using API Version  1
	I0531 19:02:21.393701   42193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:21.394026   42193 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:21.396376   42193 out.go:177] * Stopping node "ha-080809-m04"  ...
	I0531 19:02:21.397621   42193 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 19:02:21.397644   42193 main.go:141] libmachine: (ha-080809-m04) Calling .DriverName
	I0531 19:02:21.397876   42193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 19:02:21.397901   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 19:02:21.399535   42193 retry.go:31] will retry after 150.24201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:21.550880   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 19:02:21.552394   42193 retry.go:31] will retry after 416.096816ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:21.968938   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 19:02:21.970512   42193 retry.go:31] will retry after 635.029204ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:22.605817   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	I0531 19:02:22.607343   42193 retry.go:31] will retry after 637.426611ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:23.245043   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetSSHHostname
	W0531 19:02:23.246432   42193 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:23.246470   42193 main.go:141] libmachine: Stopping "ha-080809-m04"...
	I0531 19:02:23.246480   42193 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 19:02:23.247629   42193 stop.go:66] stop err: Machine "ha-080809-m04" is already stopped.
	I0531 19:02:23.247669   42193 stop.go:69] host is already stopped
	I0531 19:02:23.247683   42193 stop.go:39] StopHost: ha-080809-m03
	I0531 19:02:23.247999   42193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:23.248052   42193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:23.262784   42193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45933
	I0531 19:02:23.263140   42193 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:23.263602   42193 main.go:141] libmachine: Using API Version  1
	I0531 19:02:23.263627   42193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:23.264000   42193 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:23.265937   42193 out.go:177] * Stopping node "ha-080809-m03"  ...
	I0531 19:02:23.267280   42193 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 19:02:23.267309   42193 main.go:141] libmachine: (ha-080809-m03) Calling .DriverName
	I0531 19:02:23.267529   42193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 19:02:23.267553   42193 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 19:02:23.268910   42193 retry.go:31] will retry after 259.829225ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:23.529400   42193 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 19:02:23.530894   42193 retry.go:31] will retry after 473.565792ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:24.005565   42193 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	I0531 19:02:24.006984   42193 retry.go:31] will retry after 689.466391ms: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:24.696802   42193 main.go:141] libmachine: (ha-080809-m03) Calling .GetSSHHostname
	W0531 19:02:24.698391   42193 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh host name for driver: host is not running
	I0531 19:02:24.698420   42193 main.go:141] libmachine: Stopping "ha-080809-m03"...
	I0531 19:02:24.698428   42193 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 19:02:24.700065   42193 stop.go:66] stop err: Machine "ha-080809-m03" is already stopped.
	I0531 19:02:24.700086   42193 stop.go:69] host is already stopped
	I0531 19:02:24.700100   42193 stop.go:39] StopHost: ha-080809-m02
	I0531 19:02:24.700515   42193 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:02:24.700567   42193 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:02:24.715668   42193 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39751
	I0531 19:02:24.716117   42193 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:02:24.716657   42193 main.go:141] libmachine: Using API Version  1
	I0531 19:02:24.716678   42193 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:02:24.717017   42193 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:02:24.719722   42193 out.go:177] * Stopping node "ha-080809-m02"  ...
	I0531 19:02:24.721111   42193 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0531 19:02:24.721138   42193 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 19:02:24.721366   42193 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0531 19:02:24.721388   42193 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 19:02:24.723767   42193 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:24.724142   42193 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 19:02:24.724177   42193 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:02:24.724329   42193 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 19:02:24.724523   42193 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 19:02:24.724681   42193 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 19:02:24.724814   42193 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	I0531 19:02:24.811718   42193 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0531 19:02:24.865124   42193 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0531 19:02:24.925490   42193 main.go:141] libmachine: Stopping "ha-080809-m02"...
	I0531 19:02:24.925518   42193 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 19:02:24.926954   42193 main.go:141] libmachine: (ha-080809-m02) Calling .Stop
	I0531 19:02:24.930644   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 0/120
	I0531 19:02:25.932041   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 1/120
	I0531 19:02:26.933211   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 2/120
	I0531 19:02:27.934487   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 3/120
	I0531 19:02:28.935969   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 4/120
	I0531 19:02:29.937589   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 5/120
	I0531 19:02:30.938689   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 6/120
	I0531 19:02:31.940379   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 7/120
	I0531 19:02:32.942039   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 8/120
	I0531 19:02:33.943387   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 9/120
	I0531 19:02:34.945197   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 10/120
	I0531 19:02:35.946549   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 11/120
	I0531 19:02:36.947792   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 12/120
	I0531 19:02:37.949508   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 13/120
	I0531 19:02:38.951402   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 14/120
	I0531 19:02:39.953050   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 15/120
	I0531 19:02:40.954299   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 16/120
	I0531 19:02:41.955773   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 17/120
	I0531 19:02:42.957294   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 18/120
	I0531 19:02:43.958684   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 19/120
	I0531 19:02:44.960429   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 20/120
	I0531 19:02:45.961703   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 21/120
	I0531 19:02:46.963326   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 22/120
	I0531 19:02:47.964522   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 23/120
	I0531 19:02:48.965995   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 24/120
	I0531 19:02:49.967702   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 25/120
	I0531 19:02:50.969256   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 26/120
	I0531 19:02:51.971346   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 27/120
	I0531 19:02:52.972832   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 28/120
	I0531 19:02:53.974206   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 29/120
	I0531 19:02:54.976206   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 30/120
	I0531 19:02:55.977793   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 31/120
	I0531 19:02:56.979456   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 32/120
	I0531 19:02:57.980915   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 33/120
	I0531 19:02:58.982289   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 34/120
	I0531 19:02:59.983613   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 35/120
	I0531 19:03:00.985012   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 36/120
	I0531 19:03:01.986334   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 37/120
	I0531 19:03:02.987787   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 38/120
	I0531 19:03:03.989155   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 39/120
	I0531 19:03:04.990858   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 40/120
	I0531 19:03:05.992162   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 41/120
	I0531 19:03:06.993528   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 42/120
	I0531 19:03:07.995568   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 43/120
	I0531 19:03:08.997293   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 44/120
	I0531 19:03:09.999021   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 45/120
	I0531 19:03:11.000553   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 46/120
	I0531 19:03:12.001865   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 47/120
	I0531 19:03:13.003218   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 48/120
	I0531 19:03:14.004651   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 49/120
	I0531 19:03:15.006588   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 50/120
	I0531 19:03:16.007838   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 51/120
	I0531 19:03:17.009200   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 52/120
	I0531 19:03:18.010640   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 53/120
	I0531 19:03:19.012001   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 54/120
	I0531 19:03:20.013754   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 55/120
	I0531 19:03:21.015068   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 56/120
	I0531 19:03:22.016575   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 57/120
	I0531 19:03:23.017992   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 58/120
	I0531 19:03:24.019434   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 59/120
	I0531 19:03:25.021352   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 60/120
	I0531 19:03:26.023422   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 61/120
	I0531 19:03:27.024750   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 62/120
	I0531 19:03:28.026079   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 63/120
	I0531 19:03:29.027345   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 64/120
	I0531 19:03:30.029102   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 65/120
	I0531 19:03:31.031396   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 66/120
	I0531 19:03:32.033364   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 67/120
	I0531 19:03:33.035008   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 68/120
	I0531 19:03:34.036328   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 69/120
	I0531 19:03:35.037857   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 70/120
	I0531 19:03:36.039290   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 71/120
	I0531 19:03:37.041034   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 72/120
	I0531 19:03:38.042175   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 73/120
	I0531 19:03:39.043421   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 74/120
	I0531 19:03:40.044778   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 75/120
	I0531 19:03:41.046196   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 76/120
	I0531 19:03:42.047580   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 77/120
	I0531 19:03:43.049077   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 78/120
	I0531 19:03:44.050281   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 79/120
	I0531 19:03:45.051938   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 80/120
	I0531 19:03:46.053324   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 81/120
	I0531 19:03:47.054763   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 82/120
	I0531 19:03:48.055916   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 83/120
	I0531 19:03:49.057171   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 84/120
	I0531 19:03:50.058626   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 85/120
	I0531 19:03:51.060369   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 86/120
	I0531 19:03:52.061887   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 87/120
	I0531 19:03:53.063760   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 88/120
	I0531 19:03:54.065017   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 89/120
	I0531 19:03:55.067114   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 90/120
	I0531 19:03:56.068504   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 91/120
	I0531 19:03:57.069775   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 92/120
	I0531 19:03:58.071116   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 93/120
	I0531 19:03:59.072400   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 94/120
	I0531 19:04:00.074130   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 95/120
	I0531 19:04:01.075463   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 96/120
	I0531 19:04:02.076804   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 97/120
	I0531 19:04:03.078051   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 98/120
	I0531 19:04:04.079367   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 99/120
	I0531 19:04:05.081204   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 100/120
	I0531 19:04:06.083223   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 101/120
	I0531 19:04:07.084560   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 102/120
	I0531 19:04:08.085760   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 103/120
	I0531 19:04:09.087131   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 104/120
	I0531 19:04:10.088794   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 105/120
	I0531 19:04:11.089986   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 106/120
	I0531 19:04:12.091315   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 107/120
	I0531 19:04:13.092601   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 108/120
	I0531 19:04:14.093828   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 109/120
	I0531 19:04:15.095349   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 110/120
	I0531 19:04:16.097542   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 111/120
	I0531 19:04:17.098904   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 112/120
	I0531 19:04:18.100375   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 113/120
	I0531 19:04:19.101651   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 114/120
	I0531 19:04:20.103360   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 115/120
	I0531 19:04:21.105186   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 116/120
	I0531 19:04:22.106590   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 117/120
	I0531 19:04:23.107871   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 118/120
	I0531 19:04:24.109263   42193 main.go:141] libmachine: (ha-080809-m02) Waiting for machine to stop 119/120
	I0531 19:04:25.110118   42193 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0531 19:04:25.110189   42193 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0531 19:04:25.112522   42193 out.go:177] 
	W0531 19:04:25.113941   42193 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0531 19:04:25.113967   42193 out.go:239] * 
	* 
	W0531 19:04:25.116518   42193 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:04:25.117968   42193 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-080809 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr: exit status 7 (33.503416616s)

                                                
                                                
-- stdout --
	ha-080809
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-080809-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-080809-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-080809-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:04:25.160510   42644 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:04:25.160755   42644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:04:25.160764   42644 out.go:304] Setting ErrFile to fd 2...
	I0531 19:04:25.160768   42644 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:04:25.160947   42644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:04:25.161128   42644 out.go:298] Setting JSON to false
	I0531 19:04:25.161150   42644 mustload.go:65] Loading cluster: ha-080809
	I0531 19:04:25.161187   42644 notify.go:220] Checking for updates...
	I0531 19:04:25.162051   42644 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:04:25.162084   42644 status.go:255] checking status of ha-080809 ...
	I0531 19:04:25.162884   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:25.162929   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:25.177918   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0531 19:04:25.178336   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:25.178905   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:25.178936   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:25.179254   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:25.179407   42644 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 19:04:25.180890   42644 status.go:330] ha-080809 host status = "Running" (err=<nil>)
	I0531 19:04:25.180905   42644 host.go:66] Checking if "ha-080809" exists ...
	I0531 19:04:25.181202   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:25.181247   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:25.196101   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0531 19:04:25.196439   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:25.196829   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:25.196851   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:25.197156   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:25.197327   42644 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:04:25.200071   42644 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:04:25.200530   42644 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:04:25.200562   42644 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:04:25.200716   42644 host.go:66] Checking if "ha-080809" exists ...
	I0531 19:04:25.201065   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:25.201107   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:25.215031   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0531 19:04:25.215388   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:25.215774   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:25.215802   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:25.216079   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:25.216250   42644 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:04:25.216406   42644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:04:25.216425   42644 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:04:25.219016   42644 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:04:25.219434   42644 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:04:25.219458   42644 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:04:25.219573   42644 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:04:25.219712   42644 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:04:25.219843   42644 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:04:25.219975   42644 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:04:25.306115   42644 ssh_runner.go:195] Run: systemctl --version
	I0531 19:04:25.313100   42644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:04:25.331085   42644 kubeconfig.go:125] found "ha-080809" server: "https://192.168.39.254:8443"
	I0531 19:04:25.331117   42644 api_server.go:166] Checking apiserver status ...
	I0531 19:04:25.331145   42644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:04:25.346029   42644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5789/cgroup
	W0531 19:04:25.355082   42644 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5789/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:04:25.355131   42644 ssh_runner.go:195] Run: ls
	I0531 19:04:25.359468   42644 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:04:30.359947   42644 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0531 19:04:30.359998   42644 retry.go:31] will retry after 215.487179ms: state is "Stopped"
	I0531 19:04:30.576443   42644 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:04:35.576930   42644 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0531 19:04:35.576985   42644 retry.go:31] will retry after 254.48432ms: state is "Stopped"
	I0531 19:04:35.832448   42644 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:04:36.669214   42644 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0531 19:04:36.669282   42644 retry.go:31] will retry after 375.433318ms: state is "Stopped"
	I0531 19:04:37.044764   42644 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0531 19:04:40.125259   42644 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": dial tcp 192.168.39.254:8443: connect: no route to host
	I0531 19:04:40.125305   42644 status.go:422] ha-080809 apiserver status = Running (err=<nil>)
	I0531 19:04:40.125312   42644 status.go:257] ha-080809 status: &{Name:ha-080809 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:04:40.125347   42644 status.go:255] checking status of ha-080809-m02 ...
	I0531 19:04:40.125639   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:40.125672   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:40.140177   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45479
	I0531 19:04:40.140637   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:40.141124   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:40.141143   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:40.141411   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:40.141588   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetState
	I0531 19:04:40.143067   42644 status.go:330] ha-080809-m02 host status = "Running" (err=<nil>)
	I0531 19:04:40.143085   42644 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 19:04:40.143354   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:40.143383   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:40.157085   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38675
	I0531 19:04:40.157415   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:40.157846   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:40.157867   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:40.158187   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:40.158380   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetIP
	I0531 19:04:40.160708   42644 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:04:40.161064   42644 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 19:04:40.161194   42644 host.go:66] Checking if "ha-080809-m02" exists ...
	I0531 19:04:40.162741   42644 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:04:40.163024   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:40.163062   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:40.176946   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45093
	I0531 19:04:40.177333   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:40.177798   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:40.177820   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:40.178161   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:40.178332   42644 main.go:141] libmachine: (ha-080809-m02) Calling .DriverName
	I0531 19:04:40.178497   42644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:04:40.178515   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHHostname
	I0531 19:04:40.181102   42644 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:04:40.181541   42644 main.go:141] libmachine: (ha-080809-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:66:ad", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:42:02 +0000 UTC Type:0 Mac:52:54:00:38:66:ad Iaid: IPaddr:192.168.39.51 Prefix:24 Hostname:ha-080809-m02 Clientid:01:52:54:00:38:66:ad}
	I0531 19:04:40.181563   42644 main.go:141] libmachine: (ha-080809-m02) DBG | domain ha-080809-m02 has defined IP address 192.168.39.51 and MAC address 52:54:00:38:66:ad in network mk-ha-080809
	I0531 19:04:40.181701   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHPort
	I0531 19:04:40.181843   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHKeyPath
	I0531 19:04:40.181986   42644 main.go:141] libmachine: (ha-080809-m02) Calling .GetSSHUsername
	I0531 19:04:40.182102   42644 sshutil.go:53] new ssh client: &{IP:192.168.39.51 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809-m02/id_rsa Username:docker}
	W0531 19:04:58.589192   42644 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.51:22: connect: no route to host
	W0531 19:04:58.589273   42644 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	E0531 19:04:58.589289   42644 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 19:04:58.589298   42644 status.go:257] ha-080809-m02 status: &{Name:ha-080809-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0531 19:04:58.589313   42644 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.51:22: connect: no route to host
	I0531 19:04:58.589321   42644 status.go:255] checking status of ha-080809-m03 ...
	I0531 19:04:58.589681   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:58.589732   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:58.604060   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41399
	I0531 19:04:58.604523   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:58.605048   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:58.605068   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:58.605374   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:58.605539   42644 main.go:141] libmachine: (ha-080809-m03) Calling .GetState
	I0531 19:04:58.607036   42644 status.go:330] ha-080809-m03 host status = "Stopped" (err=<nil>)
	I0531 19:04:58.607048   42644 status.go:343] host is not running, skipping remaining checks
	I0531 19:04:58.607056   42644 status.go:257] ha-080809-m03 status: &{Name:ha-080809-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:04:58.607075   42644 status.go:255] checking status of ha-080809-m04 ...
	I0531 19:04:58.607375   42644 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:04:58.607415   42644 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:04:58.621126   42644 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0531 19:04:58.621474   42644 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:04:58.621879   42644 main.go:141] libmachine: Using API Version  1
	I0531 19:04:58.621898   42644 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:04:58.622226   42644 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:04:58.622432   42644 main.go:141] libmachine: (ha-080809-m04) Calling .GetState
	I0531 19:04:58.623819   42644 status.go:330] ha-080809-m04 host status = "Stopped" (err=<nil>)
	I0531 19:04:58.623840   42644 status.go:343] host is not running, skipping remaining checks
	I0531 19:04:58.623847   42644 status.go:257] ha-080809-m04 status: &{Name:ha-080809-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
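The status dump above shows why the stop verification fails: the primary node still reports host: Running with its apiserver Stopped, ha-080809-m02 cannot be reached over SSH (dial tcp 192.168.39.51:22: connect: no route to host) and so degrades to Host:Error / Kubelet:Nonexistent, and only m03 and m04 are cleanly Stopped. A minimal, standalone sketch of the kind of reachability probe that produces that "no route to host" error is shown below; the address is copied from this run and the snippet is illustrative, not minikube's sshutil code.

	// Reachability probe for a node's SSH port; illustrative only, not part of
	// minikube. The address is copied from the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.51:22" // ha-080809-m02
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			// A powered-off VM or expired DHCP lease typically surfaces here as
			// "connect: no route to host", which status reports as Host:Error.
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("ssh port reachable")
	}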
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr": ha-080809
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-080809-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080809-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080809-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:546: status says there are running hosts: args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr": ha-080809
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-080809-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080809-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080809-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr": ha-080809
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

                                                
                                                
ha-080809-m02
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-080809-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-080809-m04
type: Worker
host: Stopped
kubelet: Stopped
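
The three assertions above (ha_test.go:543, :546 and :549) are string-level checks over the same status output: after the cluster-wide stop the test expects no host to report Running and all kubelets to be Stopped, but ha-080809 is still Running and m02 is in Error. A rough sketch of that kind of check follows, assuming the status text is available as a string; it is not the actual ha_test.go logic.

	// Rough sketch of a string-level check over the status output above;
	// not the actual ha_test.go assertions.
	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		statusOut := "..." // paste the `minikube status` output here
		if n := strings.Count(statusOut, "host: Running"); n != 0 {
			fmt.Printf("FAIL: %d host(s) still Running after stop\n", n)
		}
		if n := strings.Count(statusOut, "kubelet: Stopped"); n < 3 {
			fmt.Printf("FAIL: only %d kubelet(s) reported Stopped\n", n)
		}
	}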

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
E0531 19:04:58.917236   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809: exit status 2 (15.580248838s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.223304159s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-080809 node start m02 -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809 -v=7                                                           | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-080809 -v=7                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-080809 --wait=true -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	| node    | ha-080809 node delete m03 -v=7                                                   | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-080809 stop -v=7                                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 18:50:40
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 18:50:40.725150   39355 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:50:40.725299   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725310   39355 out.go:304] Setting ErrFile to fd 2...
	I0531 18:50:40.725314   39355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:50:40.725806   39355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:50:40.726393   39355 out.go:298] Setting JSON to false
	I0531 18:50:40.727315   39355 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5587,"bootTime":1717175854,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:50:40.727373   39355 start.go:139] virtualization: kvm guest
	I0531 18:50:40.729712   39355 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:50:40.730928   39355 notify.go:220] Checking for updates...
	I0531 18:50:40.730934   39355 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:50:40.732260   39355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:50:40.733721   39355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:50:40.734983   39355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:50:40.736160   39355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:50:40.737350   39355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:50:40.738897   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:40.738998   39355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:50:40.739474   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.739526   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.754250   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46517
	I0531 18:50:40.754604   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.755116   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.755134   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.755455   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.755636   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.791266   39355 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 18:50:40.792482   39355 start.go:297] selected driver: kvm2
	I0531 18:50:40.792498   39355 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:f
alse freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.792668   39355 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:50:40.793032   39355 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.793116   39355 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 18:50:40.807323   39355 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 18:50:40.808038   39355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 18:50:40.808095   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:50:40.808107   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:50:40.808161   39355 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:50:40.808298   39355 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 18:50:40.810090   39355 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 18:50:40.811542   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:50:40.811577   39355 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 18:50:40.811587   39355 cache.go:56] Caching tarball of preloaded images
	I0531 18:50:40.811660   39355 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 18:50:40.811671   39355 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 18:50:40.811776   39355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 18:50:40.811945   39355 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 18:50:40.811981   39355 start.go:364] duration metric: took 19.562µs to acquireMachinesLock for "ha-080809"
	I0531 18:50:40.811994   39355 start.go:96] Skipping create...Using existing machine configuration
	I0531 18:50:40.811999   39355 fix.go:54] fixHost starting: 
	I0531 18:50:40.812235   39355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:50:40.812263   39355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:50:40.825678   39355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43507
	I0531 18:50:40.826052   39355 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:50:40.826541   39355 main.go:141] libmachine: Using API Version  1
	I0531 18:50:40.826561   39355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:50:40.826941   39355 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:50:40.827113   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.827285   39355 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 18:50:40.828664   39355 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 18:50:40.828681   39355 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 18:50:40.830849   39355 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 18:50:40.832124   39355 machine.go:94] provisionDockerMachine start ...
	I0531 18:50:40.832140   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:40.832325   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.834878   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835311   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.835338   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.835452   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.835609   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835750   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.835893   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.836053   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.836242   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.836254   39355 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 18:50:40.946220   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:40.946244   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946488   39355 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 18:50:40.946518   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:40.946731   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:40.949045   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949516   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:40.949544   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:40.949665   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:40.949823   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.949971   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:40.950075   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:40.950241   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:40.950442   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:40.950457   39355 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 18:50:41.073470   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 18:50:41.073498   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.076183   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076574   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.076607   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.076794   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.076941   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077115   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.077244   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.077375   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.077573   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.077596   39355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 18:50:41.181990   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 18:50:41.182021   39355 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 18:50:41.182043   39355 buildroot.go:174] setting up certificates
	I0531 18:50:41.182054   39355 provision.go:84] configureAuth start
	I0531 18:50:41.182082   39355 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 18:50:41.182335   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:41.184668   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185020   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.185038   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.185194   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.187153   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187566   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.187588   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.187756   39355 provision.go:143] copyHostCerts
	I0531 18:50:41.187785   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187832   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 18:50:41.187841   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 18:50:41.187901   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 18:50:41.188000   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188020   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 18:50:41.188027   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 18:50:41.188052   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 18:50:41.188106   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188122   39355 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 18:50:41.188128   39355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 18:50:41.188150   39355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 18:50:41.188221   39355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 18:50:41.316568   39355 provision.go:177] copyRemoteCerts
	I0531 18:50:41.316621   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 18:50:41.316641   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.319005   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319346   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.319370   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.319552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.319700   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.319815   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.319912   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:41.404144   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 18:50:41.404210   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 18:50:41.433385   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 18:50:41.433444   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0531 18:50:41.460422   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 18:50:41.460475   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 18:50:41.492171   39355 provision.go:87] duration metric: took 310.107503ms to configureAuth
	I0531 18:50:41.492196   39355 buildroot.go:189] setting minikube options for container-runtime
	I0531 18:50:41.492389   39355 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:50:41.492450   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:41.494816   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495193   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:41.495231   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:41.495385   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:41.495552   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495679   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:41.495809   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:41.495952   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:41.496153   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:41.496176   39355 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 18:50:47.173907   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 18:50:47.173967   39355 machine.go:97] duration metric: took 6.341813911s to provisionDockerMachine
	I0531 18:50:47.173978   39355 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 18:50:47.173987   39355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 18:50:47.174004   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.174368   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 18:50:47.174390   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.176840   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177242   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.177271   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.177380   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.177589   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.177762   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.177941   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.259542   39355 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 18:50:47.263894   39355 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 18:50:47.263914   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 18:50:47.263964   39355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 18:50:47.264048   39355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 18:50:47.264060   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 18:50:47.264146   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 18:50:47.273268   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:50:47.296570   39355 start.go:296] duration metric: took 122.58202ms for postStartSetup
	I0531 18:50:47.296610   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.296878   39355 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 18:50:47.296907   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.299205   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299525   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.299561   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.299648   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.299826   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.299968   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.300101   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 18:50:47.383092   39355 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 18:50:47.383115   39355 fix.go:56] duration metric: took 6.571114673s for fixHost
	I0531 18:50:47.383140   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.385517   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.385862   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.385890   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.386049   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.386232   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386383   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.386490   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.386627   39355 main.go:141] libmachine: Using SSH client type: native
	I0531 18:50:47.386788   39355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 18:50:47.386798   39355 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 18:50:47.493714   39355 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717181447.480513081
	
	I0531 18:50:47.493743   39355 fix.go:216] guest clock: 1717181447.480513081
	I0531 18:50:47.493763   39355 fix.go:229] Guest: 2024-05-31 18:50:47.480513081 +0000 UTC Remote: 2024-05-31 18:50:47.383123084 +0000 UTC m=+6.690893529 (delta=97.389997ms)
	I0531 18:50:47.493786   39355 fix.go:200] guest clock delta is within tolerance: 97.389997ms
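The fix step above reads the guest clock over SSH (date +%s.%N), compares it with the host-side timestamp, and only corrects the clock when the delta exceeds a tolerance; here the ~97ms difference is accepted. A minimal sketch of that comparison follows, using the two timestamps from this log; the 2-second tolerance is an assumption for illustration, not the threshold minikube actually applies.

	// Minimal sketch of the guest-clock tolerance check logged by fix.go above;
	// the tolerance constant is assumed for illustration only.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1717181447, 480513081)                         // "date +%s.%N" result from the VM
		remote := time.Date(2024, 5, 31, 18, 50, 47, 383123084, time.UTC) // host-side reference time from the log
		delta := guest.Sub(remote)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed value
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}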
	I0531 18:50:47.493793   39355 start.go:83] releasing machines lock for "ha-080809", held for 6.681803207s
	I0531 18:50:47.493817   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.494049   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:50:47.496470   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496822   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.496850   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.496953   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497467   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497627   39355 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 18:50:47.497712   39355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 18:50:47.497764   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.497783   39355 ssh_runner.go:195] Run: cat /version.json
	I0531 18:50:47.497813   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 18:50:47.500034   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500151   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500362   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500384   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500483   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.500640   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:50:47.500660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:50:47.500664   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.500827   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.500848   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 18:50:47.501008   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.501126   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 18:50:47.501254   39355 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 18:50:47.501378   39355 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 18:50:47.622196   39355 ssh_runner.go:195] Run: systemctl --version
	I0531 18:50:47.697116   39355 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 18:50:48.062218   39355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 18:50:48.099401   39355 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 18:50:48.099472   39355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 18:50:48.196165   39355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 18:50:48.196190   39355 start.go:494] detecting cgroup driver to use...
	I0531 18:50:48.196275   39355 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 18:50:48.277871   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 18:50:48.348651   39355 docker.go:217] disabling cri-docker service (if available) ...
	I0531 18:50:48.348716   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 18:50:48.390576   39355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 18:50:48.527080   39355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 18:50:48.863562   39355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 18:50:49.104326   39355 docker.go:233] disabling docker service ...
	I0531 18:50:49.104389   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 18:50:49.125823   39355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 18:50:49.143789   39355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 18:50:49.335309   39355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 18:50:49.535963   39355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 18:50:49.550940   39355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 18:50:49.572576   39355 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 18:50:49.572644   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.583923   39355 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 18:50:49.583975   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.595036   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.608828   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.622691   39355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 18:50:49.635836   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.648354   39355 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.660121   39355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 18:50:49.671040   39355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 18:50:49.681113   39355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 18:50:49.690673   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:50:49.873408   39355 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 18:52:20.348721   39355 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.475242436s)
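	[Editor's note] Everything from the crictl.yaml write through the sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) and then restarts CRI-O; the restart alone took just over 90 seconds here, which is why the run pauses between 18:50:49 and 18:52:20. A minimal, hypothetical way to confirm the applied settings on the node (not taken from this log):
	    # Illustrative verification of the drop-in written above
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl is-active crio && sudo crictl version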
	I0531 18:52:20.348763   39355 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 18:52:20.348839   39355 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 18:52:20.355179   39355 start.go:562] Will wait 60s for crictl version
	I0531 18:52:20.355233   39355 ssh_runner.go:195] Run: which crictl
	I0531 18:52:20.359350   39355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 18:52:20.402386   39355 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 18:52:20.402469   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.434311   39355 ssh_runner.go:195] Run: crio --version
	I0531 18:52:20.472611   39355 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 18:52:20.474062   39355 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 18:52:20.477170   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477634   39355 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 18:52:20.477660   39355 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 18:52:20.477870   39355 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 18:52:20.483055   39355 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 18:52:20.483208   39355 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 18:52:20.483252   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.532843   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.532863   39355 crio.go:433] Images already preloaded, skipping extraction
	I0531 18:52:20.532904   39355 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 18:52:20.576108   39355 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 18:52:20.576137   39355 cache_images.go:84] Images are preloaded, skipping loading
	I0531 18:52:20.576147   39355 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 18:52:20.576283   39355 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 18:52:20.576360   39355 ssh_runner.go:195] Run: crio config
	I0531 18:52:20.622002   39355 cni.go:84] Creating CNI manager for ""
	I0531 18:52:20.622029   39355 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 18:52:20.622042   39355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 18:52:20.622076   39355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 18:52:20.622260   39355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
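	[Editor's note] The block above is the complete kubeadm config minikube generated for this control plane (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file); it is copied to /var/tmp/minikube/kubeadm.yaml.new further down in the log. As a hedged sketch of how such a file is normally consumed (minikube drives this step itself, so the exact flags here are illustrative, not taken from this run):
	    # Illustrative only; not the exact invocation from this run
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --ignore-preflight-errors=all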
	
	I0531 18:52:20.622285   39355 kube-vip.go:115] generating kube-vip config ...
	I0531 18:52:20.622331   39355 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 18:52:20.635091   39355 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 18:52:20.635214   39355 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
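	[Editor's note] The manifest above is the kube-vip static pod that minikube writes into /etc/kubernetes/manifests (the scp to kube-vip.yaml appears just below). It elects a leader via the plndr-cp-lock lease and announces the control-plane VIP 192.168.39.254 over ARP on eth0, which is the APIServerHAVIP referenced throughout this config. Two hypothetical checks, assuming kubectl access to the cluster (not part of this log):
	    # Illustrative: the current leader should hold the lease and own the VIP
	    kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}{"\n"}'
	    ip addr show eth0 | grep 192.168.39.254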
	I0531 18:52:20.635270   39355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 18:52:20.645725   39355 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 18:52:20.645808   39355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 18:52:20.655828   39355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 18:52:20.672625   39355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 18:52:20.690003   39355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 18:52:20.706882   39355 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 18:52:20.725424   39355 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 18:52:20.732286   39355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 18:52:20.887007   39355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 18:52:20.902911   39355 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 18:52:20.902934   39355 certs.go:194] generating shared ca certs ...
	I0531 18:52:20.902958   39355 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:20.903128   39355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 18:52:20.903167   39355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 18:52:20.903177   39355 certs.go:256] generating profile certs ...
	I0531 18:52:20.903257   39355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 18:52:20.903294   39355 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 18:52:20.903308   39355 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.2 192.168.39.51 192.168.39.132 192.168.39.254]
	I0531 18:52:21.121858   39355 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 ...
	I0531 18:52:21.121891   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091: {Name:mk82bb224e9d3af07dce0b1684abbe8baa7e4f93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122065   39355 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 ...
	I0531 18:52:21.122077   39355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091: {Name:mk76a457401d3f1d1b72821102855c52f0a5de63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 18:52:21.122151   39355 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt
	I0531 18:52:21.122297   39355 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091 -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key
	I0531 18:52:21.122420   39355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 18:52:21.122434   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 18:52:21.122446   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 18:52:21.122459   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 18:52:21.122469   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 18:52:21.122482   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 18:52:21.122494   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 18:52:21.122505   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 18:52:21.122517   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 18:52:21.122566   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 18:52:21.122594   39355 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 18:52:21.122603   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 18:52:21.122625   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 18:52:21.122649   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 18:52:21.122670   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 18:52:21.122704   39355 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 18:52:21.122728   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.122742   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.122754   39355 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.123310   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 18:52:21.148456   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 18:52:21.172795   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 18:52:21.197175   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 18:52:21.221688   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0531 18:52:21.245580   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 18:52:21.269278   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 18:52:21.292738   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 18:52:21.316727   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 18:52:21.340739   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 18:52:21.363482   39355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 18:52:21.386799   39355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 18:52:21.403566   39355 ssh_runner.go:195] Run: openssl version
	I0531 18:52:21.410046   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 18:52:21.422130   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427059   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.427130   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 18:52:21.433755   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 18:52:21.444010   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 18:52:21.455312   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.459971   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.460018   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 18:52:21.465518   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 18:52:21.475141   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 18:52:21.486477   39355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491035   39355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.491075   39355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 18:52:21.497086   39355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
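	[Editor's note] The three test/ln/hash sequences above install minikubeCA.pem, 18372.pem and 183722.pem into /usr/share/ca-certificates and then link them under /etc/ssl/certs by their OpenSSL subject hash (b5213941, 51391683 and 3ec20f2e respectively), which is how OpenSSL locates CA certificates at verification time. A hypothetical snippet that mirrors the hash-link convention for a single file:
	    # Illustrative; mirrors the ln -fs commands above
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"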
	I0531 18:52:21.506659   39355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 18:52:21.511111   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 18:52:21.516686   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 18:52:21.522265   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 18:52:21.527836   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 18:52:21.533354   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 18:52:21.539272   39355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
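	[Editor's note] Each `openssl x509 -checkend 86400` call above exits 0 only if the named certificate is still valid 24 hours from now, so this pass confirms none of the control-plane certs (apiserver, etcd, front-proxy) are about to expire. The same check run by hand would be:
	    # Illustrative; exit status signals whether the cert survives another 86400 seconds
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "still valid in 24h" || echo "expires within 24h"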
	I0531 18:52:21.544898   39355 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:52:21.545057   39355 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 18:52:21.545119   39355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 18:52:21.593095   39355 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 18:52:21.593117   39355 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 18:52:21.593121   39355 cri.go:89] found id: "4ddbab57592d0103bc6815778479db8a6144425de981bd8791af83d84165980a"
	I0531 18:52:21.593124   39355 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 18:52:21.593126   39355 cri.go:89] found id: "02c77070fec86c68c73045dc1e91d4efbc9b5849cc419eacc4885001a015d503"
	I0531 18:52:21.593129   39355 cri.go:89] found id: "d110c3c293be8dd490b1b1832bc510e0413fbcd04faeaf6434250ec703f1fa63"
	I0531 18:52:21.593132   39355 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 18:52:21.593144   39355 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 18:52:21.593148   39355 cri.go:89] found id: "dfee637100fd1d000d553477c129ce4eeb3b6cd89076f634caaf1544c2c4474b"
	I0531 18:52:21.593155   39355 cri.go:89] found id: "2c9526bf60d95a664b2192cf7843abde9948688d03701bc8236312ce2861d492"
	I0531 18:52:21.593163   39355 cri.go:89] found id: "54e4715f496465b1912d44e7f94c39ff6bf0da98279fbf625ab1a9b603349679"
	I0531 18:52:21.593167   39355 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 18:52:21.593171   39355 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 18:52:21.593176   39355 cri.go:89] found id: "588c0be817c52ee12891e5207cc19b620a59aa847a1782b18af48140f294646e"
	I0531 18:52:21.593182   39355 cri.go:89] found id: "0d31a9e756e1e95bcf40a783411dd9ea63526b4a53804255f676fe7e4045c801"
	I0531 18:52:21.593188   39355 cri.go:89] found id: ""
	I0531 18:52:21.593238   39355 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.543202024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182314543179564,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9048fad4-486c-45e1-9959-3c51be7f2b89 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.543668349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1830767-d38f-455d-b8d7-44b91fa24456 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.543760235Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1830767-d38f-455d-b8d7-44b91fa24456 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.544115528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182250193440558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.k
ubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string
{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd
7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34
fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1
c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872
c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71d
c0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1830767-d38f-455d-b8d7-44b91fa24456 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.580075824Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34359c29-45dd-4677-a392-96c951894e7b name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.580149072Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34359c29-45dd-4677-a392-96c951894e7b name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.587961635Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fdb2f48f-1fed-4399-99a9-ed91350cbabc name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.588391428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182314588369797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fdb2f48f-1fed-4399-99a9-ed91350cbabc name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.589122733Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2399a279-76f1-4dbc-93c9-04aca461bba9 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.589184376Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2399a279-76f1-4dbc-93c9-04aca461bba9 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.589476378Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182250193440558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.k
ubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string
{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd
7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34
fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1
c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872
c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71d
c0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2399a279-76f1-4dbc-93c9-04aca461bba9 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.626314133Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72c26340-8da6-4b44-bd51-6eb00b4b5717 name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.626386686Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72c26340-8da6-4b44-bd51-6eb00b4b5717 name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.627227082Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ccfb9c20-d5be-4634-b703-7f31fda5c616 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.627660385Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182314627623723,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ccfb9c20-d5be-4634-b703-7f31fda5c616 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.628346799Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=be84926a-50ed-432b-93e3-dcbce81ed363 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.628401273Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=be84926a-50ed-432b-93e3-dcbce81ed363 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.628699061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182250193440558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.k
ubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string
{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd
7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34
fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1
c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872
c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71d
c0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=be84926a-50ed-432b-93e3-dcbce81ed363 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.668737459Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=72df90b2-7da6-49e8-a42e-90b6cf71df3c name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.668929785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72df90b2-7da6-49e8-a42e-90b6cf71df3c name=/runtime.v1.RuntimeService/Version
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.670240269Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3e1657c7-1c52-4831-88c0-612a71cd9ea4 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.670770118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182314670747409,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3e1657c7-1c52-4831-88c0-612a71cd9ea4 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.671441203Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=daa93540-85d6-4f3f-956d-eb42790c5e91 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.671493236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=daa93540-85d6-4f3f-956d-eb42790c5e91 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:05:14 ha-080809 crio[4230]: time="2024-05-31 19:05:14.671785085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6,PodSandboxId:3abc2bb414217e220c1b09bec960d2f54c62b19cb1c22fc5e40323767b323766,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:5,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182250193440558,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 5,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717181946918460850,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717181597430128154,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717181561462777088,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,
io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181545968157506,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"pr
otocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717181544499991239,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.k
ubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70,PodSandboxId:69407998f4cac82efd3b37d5d48732f01c57fe9ddca165bae7f38acb49fee531,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717181544453356927,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string
{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717181544404110214,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash:
cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e,PodSandboxId:4ea7aaf556078033d439ac62ab0a475e311e52bf9e103b9da65d04c2b595e0e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448511320271,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208,PodSandboxId:f1c2b748b778aadd3fb2e6ecfd03550513d8943d7d49cad44294964a12f05e26,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd
7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181448165579766,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89,PodSandboxId:12147db6b8bb624c51988b1fd43a4f6b8a384f86a7164e93637dcd2002e27195,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34
fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181448311780312,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1
c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872
c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71d
c0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[s
tring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=daa93540-85d6-4f3f-956d-eb42790c5e91 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	94447cc9162a8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Exited              kube-apiserver            5                   3abc2bb414217       kube-apiserver-ha-080809
	b1290300ca030       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Running             busybox                   0                   33c3621daf2d9       busybox-fc5497c4f-mjhlv
	cb38fb459f7a7       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      11 minutes ago       Running             kube-controller-manager   3                   69407998f4cac       kube-controller-manager-ha-080809
	9f8ac9229e760       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      12 minutes ago       Running             kube-vip                  0                   205250b7981c7       kube-vip-ha-080809
	e769a1e959901       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Running             coredns                   2                   86c8aa4ade84f       coredns-7db6d8ff4d-kbkpk
	02b6c1aa0696a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      12 minutes ago       Running             etcd                      2                   841629e56b325       etcd-ha-080809
	2b4c89df16ec6       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      12 minutes ago       Exited              kube-controller-manager   2                   69407998f4cac       kube-controller-manager-ha-080809
	255424e20e7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      12 minutes ago       Running             coredns                   2                   581a95181abc3       coredns-7db6d8ff4d-cr2wg
	b3f030d4aefd7       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   1                   4ea7aaf556078       coredns-7db6d8ff4d-cr2wg
	6ced8b7504697       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      14 minutes ago       Exited              coredns                   1                   12147db6b8bb6       coredns-7db6d8ff4d-kbkpk
	7d3a5d770ba93       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      1                   f1c2b748b778a       etcd-ha-080809
	3b39b3e6d2204       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      15 minutes ago       Exited              kindnet-cni               1                   2bdbf3a323266       kindnet-fzzfx
	6ecd8583d0646       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      16 minutes ago       Exited              storage-provisioner       1                   8bd72a9b0e002       storage-provisioner
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      23 minutes ago       Exited              kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      23 minutes ago       Exited              kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	
	
	==> coredns [255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[2067191073]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:26.688) (total time: 12093ms):
	Trace[2067191073]: ---"Objects listed" error:Unauthorized 12093ms (19:04:38.782)
	Trace[2067191073]: [12.093919623s] [12.093919623s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2043929497]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:42.543) (total time: 10243ms):
	Trace[2043929497]: ---"Objects listed" error:Unauthorized 10243ms (19:04:52.786)
	Trace[2043929497]: [10.243512004s] [10.243512004s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3159": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3159": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1085576986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:59.219) (total time: 10428ms):
	Trace[1085576986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 10428ms (19:05:09.648)
	Trace[1085576986]: [10.428823823s] [10.428823823s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3152": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3152": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:47800 - 31487 "HINFO IN 5755310607458017703.6185900873809144159. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014969161s
	
	
	==> coredns [b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:60635 - 57345 "HINFO IN 1829039879809170348.2413939385768054616. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009746833s
	
	
	==> coredns [e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1845718398]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:33.008) (total time: 12758ms):
	Trace[1845718398]: ---"Objects listed" error:Unauthorized 12758ms (19:04:45.766)
	Trace[1845718398]: [12.758297077s] [12.758297077s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1938946690]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:33.340) (total time: 12426ms):
	Trace[1938946690]: ---"Objects listed" error:Unauthorized 12426ms (19:04:45.766)
	Trace[1938946690]: [12.426636611s] [12.426636611s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1407987040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:52.472) (total time: 17174ms):
	Trace[1407987040]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 17174ms (19:05:09.647)
	Trace[1407987040]: [17.1747591s] [17.1747591s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2097501263]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:54.232) (total time: 15415ms):
	Trace[2097501263]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 15415ms (19:05:09.647)
	Trace[2097501263]: [15.415625285s] [15.415625285s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1835612567]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:54.137) (total time: 15510ms):
	Trace[1835612567]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 15510ms (19:05:09.647)
	Trace[1835612567]: [15.510206814s] [15.510206814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +12.297552] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.055284] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057048] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.190072] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.115739] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.263739] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +4.190448] systemd-fstab-generator[767]: Ignoring "noauto" option for root device
	[  +4.324345] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	[May31 18:50] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.279335] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.265342] systemd-fstab-generator[4055]: Ignoring "noauto" option for root device
	[  +0.167968] systemd-fstab-generator[4067]: Ignoring "noauto" option for root device
	[  +0.358424] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[May31 18:52] systemd-fstab-generator[4310]: Ignoring "noauto" option for root device
	[  +0.086879] kauditd_printk_skb: 183 callbacks suppressed
	[  +5.033045] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.553563] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.792945] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7] <==
	{"level":"warn","ts":"2024-05-31T19:05:11.271465Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2371303327309347458,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-31T19:05:11.772443Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2371303327309347458,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-05-31T19:05:12.074782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:12.074824Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:12.074837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:12.074913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 29f55ddef6ecd05f at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:12.074923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 2a8c31cde4a03aae at term 3"}
	{"level":"warn","ts":"2024-05-31T19:05:12.273217Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2371303327309347458,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-31T19:05:12.77404Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2371303327309347458,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-05-31T19:05:13.274544Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2371303327309347458,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-05-31T19:05:13.47475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:13.474805Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:13.47482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:13.474835Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 29f55ddef6ecd05f at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:13.474888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 2a8c31cde4a03aae at term 3"}
	{"level":"warn","ts":"2024-05-31T19:05:13.765751Z","caller":"etcdserver/v3_server.go:909","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"info","ts":"2024-05-31T19:05:14.874395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:14.874445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:14.87446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 received MsgPreVoteResp from 6c80de388e5020e8 at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:14.874476Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 29f55ddef6ecd05f at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:14.874483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 2a8c31cde4a03aae at term 3"}
	{"level":"warn","ts":"2024-05-31T19:05:14.957317Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"29f55ddef6ecd05f","rtt":"964.972µs","error":"dial tcp 192.168.39.51:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-31T19:05:14.958549Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"29f55ddef6ecd05f","rtt":"10.126681ms","error":"dial tcp 192.168.39.51:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-05-31T19:05:15.035455Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"2a8c31cde4a03aae","rtt":"0s","error":"dial tcp 192.168.39.132:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-31T19:05:15.047746Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"2a8c31cde4a03aae","rtt":"0s","error":"dial tcp 192.168.39.132:2380: i/o timeout"}
	
	
	==> etcd [7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208] <==
	{"level":"info","ts":"2024-05-31T18:50:49.240783Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.240835Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263146Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263375Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T18:50:49.263509Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.263537Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"6c80de388e5020e8","initial-advertise-peer-urls":["https://192.168.39.2:2380"],"listen-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T18:50:49.263554Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T18:50:49.286001Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.286069Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f","remote-peer-urls":["https://192.168.39.51:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.286337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.286394Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	{"level":"info","ts":"2024-05-31T18:50:49.28794Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 7818493287602331880) learners=(3065880206706227886)"}
	{"level":"info","ts":"2024-05-31T18:50:49.288056Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8","added-peer-id":"2a8c31cde4a03aae","added-peer-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.288411Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.28866Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.29168Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.314187Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.314781Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T18:50:49.315316Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315418Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.31545Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.315649Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T18:50:49.291777Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-urls":["https://192.168.39.132:2380"]}
	{"level":"info","ts":"2024-05-31T18:50:49.327174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 switched to configuration voters=(3023425937055993951 3065880206706227886 7818493287602331880)"}
	{"level":"info","ts":"2024-05-31T18:50:49.327314Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"e20ba2e00cb0e827","local-member-id":"6c80de388e5020e8"}
	
	
	==> kernel <==
	 19:05:15 up 24 min,  0 users,  load average: 0.28, 0.51, 0.41
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7] <==
	I0531 18:49:56.699488       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 18:49:56.699638       1 main.go:107] hostIP = 192.168.39.2
	podIP = 192.168.39.2
	I0531 18:49:56.699821       1 main.go:116] setting mtu 1500 for CNI 
	I0531 18:49:56.699938       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 18:49:56.699982       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 18:50:05.446913       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:19.448089       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:20.449642       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0531 18:50:38.351425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0531 18:50:41.423495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6] <==
	Trace[764757881]: ---"Objects listed" error:etcdserver: request timed out 12998ms (19:05:06.775)
	Trace[764757881]: [12.998820677s] [12.998820677s] END
	E0531 19:05:06.775840       1 cacher.go:475] cacher (prioritylevelconfigurations.flowcontrol.apiserver.k8s.io): unexpected ListAndWatch error: failed to list *flowcontrol.PriorityLevelConfiguration: etcdserver: request timed out; reinitializing...
	I0531 19:05:06.775951       1 trace.go:236] Trace[922907450]: "List(recursive=true) etcd3" audit-id:,key:/statefulsets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (31-May-2024 19:04:53.776) (total time: 12999ms):
	Trace[922907450]: [12.999831813s] [12.999831813s] END
	W0531 19:05:06.778504       1 reflector.go:547] storage/cacher.go:/statefulsets: failed to list *apps.StatefulSet: etcdserver: request timed out
	I0531 19:05:06.778596       1 trace.go:236] Trace[949276398]: "Reflector ListAndWatch" name:storage/cacher.go:/statefulsets (31-May-2024 19:04:53.776) (total time: 13002ms):
	Trace[949276398]: ---"Objects listed" error:etcdserver: request timed out 13002ms (19:05:06.778)
	Trace[949276398]: [13.002492299s] [13.002492299s] END
	I0531 19:05:06.776064       1 trace.go:236] Trace[459578925]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:90aa5b5b-72b1-459b-b1f4-b53972459407,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:GET (31-May-2024 19:04:52.784) (total time: 13991ms):
	Trace[459578925]: [13.99135126s] [13.99135126s] END
	I0531 19:05:06.778751       1 trace.go:236] Trace[1194208900]: "List(recursive=true) etcd3" audit-id:,key:/replicasets,resourceVersion:,resourceVersionMatch:,limit:10000,continue: (31-May-2024 19:04:53.782) (total time: 12995ms):
	Trace[1194208900]: [12.995818746s] [12.995818746s] END
	W0531 19:05:06.778785       1 reflector.go:547] storage/cacher.go:/replicasets: failed to list *apps.ReplicaSet: etcdserver: request timed out
	I0531 19:05:06.778815       1 trace.go:236] Trace[1566761366]: "Reflector ListAndWatch" name:storage/cacher.go:/replicasets (31-May-2024 19:04:53.782) (total time: 12995ms):
	Trace[1566761366]: ---"Objects listed" error:etcdserver: request timed out 12995ms (19:05:06.778)
	Trace[1566761366]: [12.995976837s] [12.995976837s] END
	E0531 19:05:06.778904       1 cacher.go:475] cacher (replicasets.apps): unexpected ListAndWatch error: failed to list *apps.ReplicaSet: etcdserver: request timed out; reinitializing...
	I0531 19:05:06.776117       1 trace.go:236] Trace[1516002162]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:9e061dad-9295-49db-999a-4f376a8ad8db,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterrolebindings,scope:cluster,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:LIST (31-May-2024 19:04:59.021) (total time: 7754ms):
	Trace[1516002162]: ["List(recursive=true) etcd3" audit-id:9e061dad-9295-49db-999a-4f376a8ad8db,key:/clusterrolebindings,resourceVersion:0,resourceVersionMatch:,limit:500,continue: 7754ms (19:04:59.021)]
	Trace[1516002162]: [7.754833817s] [7.754833817s] END
	I0531 19:05:06.776152       1 trace.go:236] Trace[2062959651]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:e3bf7c90-9660-48c9-beea-580f21ffc7d4,client:127.0.0.1,api-group:,api-version:v1,name:coredns,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns,user-agent:kube-apiserver/v1.30.1 (linux/amd64) kubernetes/6911225,verb:GET (31-May-2024 19:04:59.225) (total time: 7551ms):
	Trace[2062959651]: [7.551125205s] [7.551125205s] END
	W0531 19:05:06.779158       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: etcdserver: request timed out. Retrying...
	F0531 19:05:06.779200       1 hooks.go:203] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: timed out waiting for the condition
	
	
	==> kube-controller-manager [2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70] <==
	I0531 18:52:25.112838       1 serving.go:380] Generated self-signed cert in-memory
	I0531 18:52:25.407512       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0531 18:52:25.407561       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 18:52:25.409457       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 18:52:25.409602       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 18:52:25.410217       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0531 18:52:25.410274       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 18:52:46.236432       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-controller-manager [cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f] <==
	W0531 19:04:59.113017       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:04:59.615118       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:00.617484       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:02.619461       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0531 19:05:02.619548       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-080809"
	E0531 19:05:02.619564       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.2:8443/api/v1/nodes/ha-080809\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0531 19:05:02.620504       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:03.122943       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:04.124454       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:06.125671       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	E0531 19:05:06.125789       1 node_lifecycle_controller.go:973] "Error updating node" err="Put \"https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02/status\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node="ha-080809-m02"
	W0531 19:05:06.126684       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:06.628148       1 client_builder_dynamic.go:197] get or create service account failed: serviceaccounts "node-controller" is forbidden: User "system:kube-controller-manager" cannot get resource "serviceaccounts" in API group "" in the namespace "kube-system"
	W0531 19:05:07.629374       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.2:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.2:8443: connect: connection refused
	W0531 19:05:09.630621       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.2:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.2:8443: connect: connection refused
	E0531 19:05:09.630762       1 node_lifecycle_controller.go:715] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="ha-080809-m02"
	E0531 19:05:09.630839       1 node_lifecycle_controller.go:720] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.2:8443/api/v1/nodes/ha-080809-m02\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-lifecycle-controller" node=""
	W0531 19:05:11.278931       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.FlowSchema: Get "https://192.168.39.2:8443/apis/flowcontrol.apiserver.k8s.io/v1/flowschemas?resourceVersion=3178": dial tcp 192.168.39.2:8443: connect: connection refused
	E0531 19:05:11.279098       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.FlowSchema: failed to list *v1.FlowSchema: Get "https://192.168.39.2:8443/apis/flowcontrol.apiserver.k8s.io/v1/flowschemas?resourceVersion=3178": dial tcp 192.168.39.2:8443: connect: connection refused
	W0531 19:05:12.500765       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.2:8443/api/v1/persistentvolumeclaims?resourceVersion=3193": dial tcp 192.168.39.2:8443: connect: connection refused
	E0531 19:05:12.500961       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.2:8443/api/v1/persistentvolumeclaims?resourceVersion=3193": dial tcp 192.168.39.2:8443: connect: connection refused
	W0531 19:05:13.756276       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ValidatingWebhookConfiguration: Get "https://192.168.39.2:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=3186": dial tcp 192.168.39.2:8443: connect: connection refused
	E0531 19:05:13.756347       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingWebhookConfiguration: failed to list *v1.ValidatingWebhookConfiguration: Get "https://192.168.39.2:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?resourceVersion=3186": dial tcp 192.168.39.2:8443: connect: connection refused
	W0531 19:05:14.631701       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.2:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.2:8443: connect: connection refused
	W0531 19:05:15.132270       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.39.2:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.39.2:8443: connect: connection refused
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	E0531 18:49:37.041507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.258008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:55.471454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:55.471631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.544552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.545008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:16.977062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:16.977207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:23.119757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:23.120054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:26.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:26.191634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	W0531 18:50:37.956048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:37.956139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:38.789958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:50:38.790048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:50:39.380066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:50:39.380145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:50:39.438437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.438537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:39.864506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:50:39.864612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:50:39.954574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.954665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.130549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:40.130602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.173351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:50:40.173447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:50:40.275786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:50:40.275960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:50:40.403575       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:50:40.403680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:50:40.405616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:50:40.405714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:50:40.695638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:40.695697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:41.623659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420031    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420062    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420101    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420114    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420124    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420146    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420551    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420599    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420614    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.420640    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\\\" already exists\"" pod="kube-system/kube-proxy-dmg2w" podUID="50600abe-9fae-41f8-8ff2-05c8407163c0"
	May 31 19:05:08 ha-080809 kubelet[1363]: I0531 19:05:08.477633    1363 scope.go:117] "RemoveContainer" containerID="94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6"
	May 31 19:05:08 ha-080809 kubelet[1363]: E0531 19:05:08.478078    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-080809_kube-system(63f626d576ff021f9c6d97fcb391e443)\"" pod="kube-system/kube-apiserver-ha-080809" podUID="63f626d576ff021f9c6d97fcb391e443"
	May 31 19:05:10 ha-080809 kubelet[1363]: E0531 19:05:10.417630    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists"
	May 31 19:05:10 ha-080809 kubelet[1363]: E0531 19:05:10.417677    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:05:10 ha-080809 kubelet[1363]: E0531 19:05:10.417693    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:05:10 ha-080809 kubelet[1363]: E0531 19:05:10.417755    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="11521630-b23f-4574-9eda-8d0e5d0b7e10"
	May 31 19:05:10 ha-080809 kubelet[1363]: I0531 19:05:10.459357    1363 scope.go:117] "RemoveContainer" containerID="94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6"
	May 31 19:05:10 ha-080809 kubelet[1363]: E0531 19:05:10.459778    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ha-080809_kube-system(63f626d576ff021f9c6d97fcb391e443)\"" pod="kube-system/kube-apiserver-ha-080809" podUID="63f626d576ff021f9c6d97fcb391e443"
	May 31 19:05:11 ha-080809 kubelet[1363]: E0531 19:05:11.183390    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-080809\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-080809?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 31 19:05:11 ha-080809 kubelet[1363]: E0531 19:05:11.183608    1363 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-080809?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	May 31 19:05:11 ha-080809 kubelet[1363]: I0531 19:05:11.183945    1363 status_manager.go:853] "Failed to get status for pod" podUID="63f626d576ff021f9c6d97fcb391e443" pod="kube-system/kube-apiserver-ha-080809" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 31 19:05:14 ha-080809 kubelet[1363]: I0531 19:05:14.255546    1363 status_manager.go:853] "Failed to get status for pod" podUID="63f626d576ff021f9c6d97fcb391e443" pod="kube-system/kube-apiserver-ha-080809" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-080809\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 31 19:05:14 ha-080809 kubelet[1363]: E0531 19:05:14.255668    1363 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-080809\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-080809?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	May 31 19:05:14 ha-080809 kubelet[1363]: W0531 19:05:14.256335    1363 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=3153": dial tcp 192.168.39.254:8443: connect: no route to host
	May 31 19:05:14 ha-080809 kubelet[1363]: E0531 19:05:14.256396    1363 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/default/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=3153": dial tcp 192.168.39.254:8443: connect: no route to host
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:05:14.272093   42899 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
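The lastStart.txt read failure above ("bufio.Scanner: token too long") is the standard behaviour of Go's bufio.Scanner when a single line exceeds its default 64 KiB token limit (bufio.MaxScanTokenSize). Below is a minimal standalone sketch of reading such a file with an enlarged scan buffer; the file path and buffer sizes are illustrative only and this is not minikube's actual logs.go code:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default limit is bufio.MaxScanTokenSize (64 KiB); raising it lets
		// very long log lines scan without failing with "token too long".
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}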
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809: exit status 2 (221.12236ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-080809" apiserver is not running, skipping kubectl commands (state="Stopped")
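The status check above passes --format={{.APIServer}}, a Go text/template rendered against the node's status object, which is why the captured stdout is just "Stopped". A minimal standalone sketch of that kind of rendering; the Status struct and its field names are assumed for illustration and are not minikube's exact types:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for a per-node status object exposing an APIServer field
	// (hypothetical struct, for illustration only).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Stopped", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
	}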
--- FAIL: TestMultiControlPlane/serial/StopCluster (174.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (335.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-080809 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0531 19:09:58.916751   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-080809 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: signal: killed (5m33.595987289s)

                                                
                                                
-- stdout --
	* [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	* Updating the running kvm2 "ha-080809" VM ...
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	* Enabled addons: 
	
	* Starting "ha-080809-m02" control-plane node in "ha-080809" cluster
	* Restarting existing kvm2 VM for "ha-080809-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.39.2
	* Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	  - env NO_PROXY=192.168.39.2
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:05:15.728041   42953 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:05:15.728307   42953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:05:15.728317   42953 out.go:304] Setting ErrFile to fd 2...
	I0531 19:05:15.728323   42953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:05:15.728498   42953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:05:15.729053   42953 out.go:298] Setting JSON to false
	I0531 19:05:15.729963   42953 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6462,"bootTime":1717175854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:05:15.730020   42953 start.go:139] virtualization: kvm guest
	I0531 19:05:15.732353   42953 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:05:15.733604   42953 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:05:15.733570   42953 notify.go:220] Checking for updates...
	I0531 19:05:15.734791   42953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:05:15.736168   42953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:05:15.737485   42953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:05:15.738771   42953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:05:15.740086   42953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:05:15.741622   42953 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:05:15.742073   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.742115   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.756716   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0531 19:05:15.757191   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.757800   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.757819   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.758187   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.758405   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.758675   42953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:05:15.758974   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.759016   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.773332   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0531 19:05:15.773760   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.774166   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.774190   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.774523   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.774717   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.807659   42953 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 19:05:15.808921   42953 start.go:297] selected driver: kvm2
	I0531 19:05:15.808935   42953 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:05:15.809132   42953 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:05:15.809463   42953 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:05:15.809549   42953 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:05:15.824273   42953 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:05:15.824907   42953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:05:15.824946   42953 cni.go:84] Creating CNI manager for ""
	I0531 19:05:15.824971   42953 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 19:05:15.825026   42953 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:05:15.825163   42953 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:05:15.827069   42953 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 19:05:15.828443   42953 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:05:15.828478   42953 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 19:05:15.828494   42953 cache.go:56] Caching tarball of preloaded images
	I0531 19:05:15.828565   42953 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:05:15.828580   42953 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 19:05:15.828724   42953 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 19:05:15.828978   42953 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:05:15.829032   42953 start.go:364] duration metric: took 30.17µs to acquireMachinesLock for "ha-080809"
	I0531 19:05:15.829050   42953 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:05:15.829060   42953 fix.go:54] fixHost starting: 
	I0531 19:05:15.829331   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.829366   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.842965   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0531 19:05:15.843443   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.843968   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.843989   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.844314   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.844601   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.844753   42953 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 19:05:15.846218   42953 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 19:05:15.846235   42953 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 19:05:15.849070   42953 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 19:05:15.850492   42953 machine.go:94] provisionDockerMachine start ...
	I0531 19:05:15.850510   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.850680   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:15.853088   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.853514   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:15.853547   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.853650   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:15.853812   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.853970   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.854070   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:15.854235   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:15.854458   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:15.854470   42953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 19:05:15.961788   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 19:05:15.961811   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:15.962024   42953 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 19:05:15.962045   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:15.962218   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:15.964566   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.964870   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:15.964899   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.965049   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:15.965205   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.965368   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.965520   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:15.965688   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:15.965873   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:15.965888   42953 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 19:05:16.088529   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 19:05:16.088553   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.091138   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.091610   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.091637   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.091847   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.092038   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.092188   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.092361   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.092552   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:16.092758   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:16.092782   42953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:05:16.201576   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:05:16.201609   42953 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:05:16.201635   42953 buildroot.go:174] setting up certificates
	I0531 19:05:16.201644   42953 provision.go:84] configureAuth start
	I0531 19:05:16.201655   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:16.201882   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:05:16.204415   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.204790   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.204805   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.204942   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.207116   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.207481   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.207499   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.207636   42953 provision.go:143] copyHostCerts
	I0531 19:05:16.207667   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:05:16.207715   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:05:16.207730   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:05:16.207807   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:05:16.207909   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:05:16.207935   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:05:16.207943   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:05:16.207986   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:05:16.208062   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:05:16.208085   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:05:16.208092   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:05:16.208130   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:05:16.208223   42953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 19:05:16.256616   42953 provision.go:177] copyRemoteCerts
	I0531 19:05:16.256666   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:05:16.256695   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.258947   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.259263   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.259289   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.259453   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.259598   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.259734   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.259840   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:05:16.347393   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:05:16.347461   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:05:16.372707   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:05:16.372763   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0531 19:05:16.397723   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:05:16.397811   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:05:16.422907   42953 provision.go:87] duration metric: took 221.252728ms to configureAuth
	I0531 19:05:16.422938   42953 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:05:16.423186   42953 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:05:16.423263   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.425929   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.426295   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.426311   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.426584   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.426769   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.426931   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.427077   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.427304   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:16.427481   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:16.427498   42953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:06:47.230465   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:06:47.230494   42953 machine.go:97] duration metric: took 1m31.379986436s to provisionDockerMachine
	I0531 19:06:47.230508   42953 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 19:06:47.230522   42953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:06:47.230544   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.230837   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:06:47.230858   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.234069   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.234523   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.234541   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.234704   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.234867   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.234993   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.235155   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.319937   42953 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:06:47.324721   42953 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:06:47.324744   42953 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:06:47.324795   42953 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:06:47.324859   42953 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:06:47.324869   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 19:06:47.324952   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:06:47.335353   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:06:47.360918   42953 start.go:296] duration metric: took 130.39778ms for postStartSetup
	I0531 19:06:47.360982   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.361257   42953 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 19:06:47.361280   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.363788   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.364187   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.364222   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.364383   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.364546   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.364699   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.364819   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 19:06:47.446823   42953 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 19:06:47.446842   42953 fix.go:56] duration metric: took 1m31.617783514s for fixHost
	I0531 19:06:47.446861   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.449225   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.449572   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.449601   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.449686   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.449843   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.449991   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.450187   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.450353   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:06:47.450541   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:06:47.450553   42953 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 19:06:47.553728   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717182407.525069982
	
	I0531 19:06:47.553750   42953 fix.go:216] guest clock: 1717182407.525069982
	I0531 19:06:47.553759   42953 fix.go:229] Guest: 2024-05-31 19:06:47.525069982 +0000 UTC Remote: 2024-05-31 19:06:47.446849324 +0000 UTC m=+91.750573158 (delta=78.220658ms)
	I0531 19:06:47.553780   42953 fix.go:200] guest clock delta is within tolerance: 78.220658ms
	I0531 19:06:47.553786   42953 start.go:83] releasing machines lock for "ha-080809", held for 1m31.724742792s
	I0531 19:06:47.553804   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.554061   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:06:47.556766   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.557173   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.557194   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.557366   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.557767   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.557952   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.558037   42953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:06:47.558090   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.558222   42953 ssh_runner.go:195] Run: cat /version.json
	I0531 19:06:47.558247   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.560479   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.560726   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.560884   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.560909   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.561051   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.561106   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.561129   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.561209   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.561378   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.561392   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.561549   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.561580   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.561676   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.561822   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.637258   42953 ssh_runner.go:195] Run: systemctl --version
	I0531 19:06:47.665572   42953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:06:47.825588   42953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 19:06:47.831844   42953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:06:47.831895   42953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:06:47.841013   42953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:06:47.841025   42953 start.go:494] detecting cgroup driver to use...
	I0531 19:06:47.841083   42953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:06:47.857172   42953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:06:47.870612   42953 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:06:47.870655   42953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:06:47.883514   42953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:06:47.896434   42953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:06:48.041355   42953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:06:48.188309   42953 docker.go:233] disabling docker service ...
	I0531 19:06:48.188387   42953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:06:48.206186   42953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:06:48.219529   42953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:06:48.359789   42953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:06:48.504832   42953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:06:48.519261   42953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:06:48.537606   42953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:06:48.537655   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.548170   42953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:06:48.548225   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.558561   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.568720   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.578919   42953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:06:48.589249   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.599356   42953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.610126   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.620219   42953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:06:48.629410   42953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:06:48.638718   42953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:06:48.775372   42953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:06:55.839370   42953 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.063960081s)
	I0531 19:06:55.839402   42953 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:06:55.839456   42953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:06:55.844656   42953 start.go:562] Will wait 60s for crictl version
	I0531 19:06:55.844716   42953 ssh_runner.go:195] Run: which crictl
	I0531 19:06:55.848552   42953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:06:55.890589   42953 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:06:55.890672   42953 ssh_runner.go:195] Run: crio --version
	I0531 19:06:55.919723   42953 ssh_runner.go:195] Run: crio --version
	I0531 19:06:55.950488   42953 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 19:06:55.952068   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:06:55.954611   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:55.954934   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:55.954971   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:55.955177   42953 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 19:06:55.960058   42953 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:06:55.960195   42953 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:06:55.960236   42953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:06:56.005050   42953 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:06:56.005068   42953 crio.go:433] Images already preloaded, skipping extraction
	I0531 19:06:56.005118   42953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:06:56.040470   42953 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:06:56.040488   42953 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:06:56.040501   42953 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 19:06:56.040583   42953 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
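The kubelet drop-in above clears the packaged ExecStart and re-sets it with node-specific flags (the hostname override and --node-ip for this control-plane node). A hedged sketch of rendering such a drop-in from a template; the template text mirrors the log, while the field names (BinDir, NodeName, NodeIP) are made up for illustration:

// Illustrative sketch: render a kubelet systemd drop-in like the one logged above.
package main

import (
	"os"
	"text/template"
)

const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	// Values taken from the log above; a real generator would derive them from the node config.
	_ = t.Execute(os.Stdout, map[string]string{
		"BinDir":   "/var/lib/minikube/binaries/v1.30.1",
		"NodeName": "ha-080809",
		"NodeIP":   "192.168.39.2",
	})
}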
	I0531 19:06:56.040642   42953 ssh_runner.go:195] Run: crio config
	I0531 19:06:56.086792   42953 cni.go:84] Creating CNI manager for ""
	I0531 19:06:56.086813   42953 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 19:06:56.086821   42953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:06:56.086841   42953 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:06:56.086980   42953 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
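A few points stand out in the generated kubeadm config: the control-plane endpoint is control-plane.minikube.internal:8443 (mapped in /etc/hosts to the HA VIP), kubelet disk eviction is effectively disabled (imageGCHighThresholdPercent: 100 plus 0% evictionHard thresholds), and kube-proxy's conntrack timeouts are zeroed so those sysctls are skipped. The file is a single multi-document YAML; a small stdlib-only Go sketch that splits it and lists each document's kind (the embedded YAML below is abbreviated for illustration):

// Illustrative sketch: the kubeadm config above is one file containing four
// YAML documents. Split on "---" and report each document's "kind".
package main

import (
	"fmt"
	"strings"
)

const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func kinds(doc string) []string {
	var out []string
	for _, d := range strings.Split(doc, "\n---\n") {
		for _, line := range strings.Split(d, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	// Prints: [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(kinds(kubeadmYAML))
}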
	
	I0531 19:06:56.086996   42953 kube-vip.go:115] generating kube-vip config ...
	I0531 19:06:56.087034   42953 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 19:06:56.100769   42953 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 19:06:56.100857   42953 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
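kube-vip runs as a static pod on each control-plane node; with cp_enable set and leader election via the plndr-cp-lock lease, the elected instance advertises the shared API-server VIP 192.168.39.254 and load-balances port 8443. An illustrative reachability check against that VIP (not part of minikube; address and port taken from the config above):

// Illustrative check: verify the kube-vip-managed control-plane VIP answers
// on the API server port.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	vip := "192.168.39.254:8443" // APIServerHAVIP from the cluster config
	conn, err := net.DialTimeout("tcp", vip, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable:", conn.RemoteAddr())
}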
	I0531 19:06:56.100907   42953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 19:06:56.112303   42953 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:06:56.112354   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 19:06:56.123865   42953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 19:06:56.142116   42953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:06:56.159914   42953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 19:06:56.177772   42953 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 19:06:56.197290   42953 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 19:06:56.201597   42953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:06:56.349094   42953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:06:56.364626   42953 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 19:06:56.364644   42953 certs.go:194] generating shared ca certs ...
	I0531 19:06:56.364660   42953 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:06:56.364809   42953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:06:56.364857   42953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:06:56.364883   42953 certs.go:256] generating profile certs ...
	I0531 19:06:56.365018   42953 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 19:06:56.365092   42953 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 19:06:56.365140   42953 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 19:06:56.365154   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:06:56.365182   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:06:56.365199   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:06:56.365215   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:06:56.365229   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:06:56.365256   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:06:56.365275   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:06:56.365293   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:06:56.365368   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:06:56.365407   42953 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:06:56.365420   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:06:56.365457   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:06:56.365488   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:06:56.365524   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:06:56.365578   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:06:56.365611   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.365631   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.365648   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.366267   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:06:56.389857   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:06:56.412315   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:06:56.434640   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:06:56.457113   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0531 19:06:56.479636   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:06:56.502355   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:06:56.525781   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:06:56.548991   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:06:56.580035   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:06:56.680077   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:06:56.727782   42953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:06:56.770785   42953 ssh_runner.go:195] Run: openssl version
	I0531 19:06:56.786236   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:06:56.804516   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.814189   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.814242   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.824144   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 19:06:56.855733   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:06:56.875030   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.880665   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.880708   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.903708   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:06:56.930964   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:06:56.949631   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.958031   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.958092   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.973007   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
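The openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for the minikube CA), which is how OpenSSL-based clients locate trusted roots. A rough Go sketch of one hash-and-link step; linkBySubjectHash is a hypothetical helper, and the real commands above first test for an existing link:

// Illustrative sketch: ask openssl for a certificate's subject hash and link
// it into /etc/ssl/certs as "<hash>.0" (requires root to actually run).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// os.Symlink fails if the link already exists; the `test -L || ln -fs`
	// commands above guard against that case.
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}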
	I0531 19:06:56.994914   42953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:06:57.012949   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:06:57.028340   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:06:57.040596   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:06:57.061342   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:06:57.069072   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:06:57.075599   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
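Each existing control-plane certificate is then checked with openssl x509 -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours and would trigger regeneration. A stdlib Go equivalent of that check (illustrative; expiresWithin is a made-up helper):

// Illustrative Go equivalent of `openssl x509 -checkend 86400`: report whether
// a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate is already expired or will expire inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}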
	I0531 19:06:57.082955   42953 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:06:57.083070   42953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:06:57.083135   42953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:06:57.144737   42953 cri.go:89] found id: "e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec"
	I0531 19:06:57.144763   42953 cri.go:89] found id: "9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb"
	I0531 19:06:57.144769   42953 cri.go:89] found id: "94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6"
	I0531 19:06:57.144774   42953 cri.go:89] found id: "cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f"
	I0531 19:06:57.144778   42953 cri.go:89] found id: "9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6"
	I0531 19:06:57.144782   42953 cri.go:89] found id: "e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a"
	I0531 19:06:57.144786   42953 cri.go:89] found id: "02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7"
	I0531 19:06:57.144790   42953 cri.go:89] found id: "2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70"
	I0531 19:06:57.144794   42953 cri.go:89] found id: "255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4"
	I0531 19:06:57.144802   42953 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 19:06:57.144809   42953 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 19:06:57.144814   42953 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 19:06:57.144818   42953 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 19:06:57.144823   42953 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 19:06:57.144832   42953 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 19:06:57.144839   42953 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 19:06:57.144843   42953 cri.go:89] found id: ""
	I0531 19:06:57.144897   42953 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-080809 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio" : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-080809 -n ha-080809
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-080809 logs -n 25: (1.565103419s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m04 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp testdata/cp-test.txt                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt                       |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809 sudo cat                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809.txt                                 |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m02 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n                                                                 | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | ha-080809-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-080809 ssh -n ha-080809-m03 sudo cat                                          | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC | 31 May 24 18:45 UTC |
	|         | /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-080809 node stop m02 -v=7                                                     | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:45 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-080809 node start m02 -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809 -v=7                                                           | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-080809 -v=7                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-080809 --wait=true -v=7                                                    | ha-080809 | jenkins | v1.33.1 | 31 May 24 18:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-080809                                                                | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	| node    | ha-080809 node delete m03 -v=7                                                   | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-080809 stop -v=7                                                              | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:02 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-080809 --wait=true                                                         | ha-080809 | jenkins | v1.33.1 | 31 May 24 19:05 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=kvm2                                                                    |           |         |         |                     |                     |
	|         | --container-runtime=crio                                                         |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 19:05:15
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:05:15.728041   42953 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:05:15.728307   42953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:05:15.728317   42953 out.go:304] Setting ErrFile to fd 2...
	I0531 19:05:15.728323   42953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:05:15.728498   42953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:05:15.729053   42953 out.go:298] Setting JSON to false
	I0531 19:05:15.729963   42953 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6462,"bootTime":1717175854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:05:15.730020   42953 start.go:139] virtualization: kvm guest
	I0531 19:05:15.732353   42953 out.go:177] * [ha-080809] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:05:15.733604   42953 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:05:15.733570   42953 notify.go:220] Checking for updates...
	I0531 19:05:15.734791   42953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:05:15.736168   42953 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:05:15.737485   42953 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:05:15.738771   42953 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:05:15.740086   42953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:05:15.741622   42953 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:05:15.742073   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.742115   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.756716   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I0531 19:05:15.757191   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.757800   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.757819   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.758187   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.758405   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.758675   42953 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:05:15.758974   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.759016   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.773332   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45565
	I0531 19:05:15.773760   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.774166   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.774190   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.774523   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.774717   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.807659   42953 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 19:05:15.808921   42953 start.go:297] selected driver: kvm2
	I0531 19:05:15.808935   42953 start.go:901] validating driver "kvm2" against &{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false e
fk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:05:15.809132   42953 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:05:15.809463   42953 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:05:15.809549   42953 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:05:15.824273   42953 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:05:15.824907   42953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:05:15.824946   42953 cni.go:84] Creating CNI manager for ""
	I0531 19:05:15.824971   42953 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 19:05:15.825026   42953 start.go:340] cluster config:
	{Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller
:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:
0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:05:15.825163   42953 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:05:15.827069   42953 out.go:177] * Starting "ha-080809" primary control-plane node in "ha-080809" cluster
	I0531 19:05:15.828443   42953 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:05:15.828478   42953 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 19:05:15.828494   42953 cache.go:56] Caching tarball of preloaded images
	I0531 19:05:15.828565   42953 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:05:15.828580   42953 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 19:05:15.828724   42953 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/config.json ...
	I0531 19:05:15.828978   42953 start.go:360] acquireMachinesLock for ha-080809: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:05:15.829032   42953 start.go:364] duration metric: took 30.17µs to acquireMachinesLock for "ha-080809"
	I0531 19:05:15.829050   42953 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:05:15.829060   42953 fix.go:54] fixHost starting: 
	I0531 19:05:15.829331   42953 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:05:15.829366   42953 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:05:15.842965   42953 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33803
	I0531 19:05:15.843443   42953 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:05:15.843968   42953 main.go:141] libmachine: Using API Version  1
	I0531 19:05:15.843989   42953 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:05:15.844314   42953 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:05:15.844601   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.844753   42953 main.go:141] libmachine: (ha-080809) Calling .GetState
	I0531 19:05:15.846218   42953 fix.go:112] recreateIfNeeded on ha-080809: state=Running err=<nil>
	W0531 19:05:15.846235   42953 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 19:05:15.849070   42953 out.go:177] * Updating the running kvm2 "ha-080809" VM ...
	I0531 19:05:15.850492   42953 machine.go:94] provisionDockerMachine start ...
	I0531 19:05:15.850510   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:05:15.850680   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:15.853088   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.853514   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:15.853547   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.853650   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:15.853812   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.853970   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.854070   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:15.854235   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:15.854458   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:15.854470   42953 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 19:05:15.961788   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 19:05:15.961811   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:15.962024   42953 buildroot.go:166] provisioning hostname "ha-080809"
	I0531 19:05:15.962045   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:15.962218   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:15.964566   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.964870   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:15.964899   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:15.965049   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:15.965205   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.965368   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:15.965520   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:15.965688   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:15.965873   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:15.965888   42953 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-080809 && echo "ha-080809" | sudo tee /etc/hostname
	I0531 19:05:16.088529   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-080809
	
	I0531 19:05:16.088553   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.091138   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.091610   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.091637   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.091847   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.092038   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.092188   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.092361   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.092552   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:16.092758   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:16.092782   42953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-080809' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-080809/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-080809' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:05:16.201576   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
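	Whichever branch of the hosts script above ran, the guest's /etc/hosts should end up carrying a 127.0.1.1 alias for the node name. A minimal spot check (not part of the captured run, shown only for orientation):
	# Expected entry after the script above, regardless of which branch executed:
	grep '^127.0.1.1' /etc/hosts    # 127.0.1.1 ha-080809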
	I0531 19:05:16.201609   42953 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:05:16.201635   42953 buildroot.go:174] setting up certificates
	I0531 19:05:16.201644   42953 provision.go:84] configureAuth start
	I0531 19:05:16.201655   42953 main.go:141] libmachine: (ha-080809) Calling .GetMachineName
	I0531 19:05:16.201882   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:05:16.204415   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.204790   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.204805   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.204942   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.207116   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.207481   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.207499   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.207636   42953 provision.go:143] copyHostCerts
	I0531 19:05:16.207667   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:05:16.207715   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:05:16.207730   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:05:16.207807   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:05:16.207909   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:05:16.207935   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:05:16.207943   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:05:16.207986   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:05:16.208062   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:05:16.208085   42953 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:05:16.208092   42953 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:05:16.208130   42953 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:05:16.208223   42953 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.ha-080809 san=[127.0.0.1 192.168.39.2 ha-080809 localhost minikube]
	I0531 19:05:16.256616   42953 provision.go:177] copyRemoteCerts
	I0531 19:05:16.256666   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:05:16.256695   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.258947   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.259263   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.259289   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.259453   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.259598   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.259734   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.259840   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:05:16.347393   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:05:16.347461   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:05:16.372707   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:05:16.372763   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0531 19:05:16.397723   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:05:16.397811   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:05:16.422907   42953 provision.go:87] duration metric: took 221.252728ms to configureAuth
	I0531 19:05:16.422938   42953 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:05:16.423186   42953 config.go:182] Loaded profile config "ha-080809": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:05:16.423263   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:05:16.425929   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.426295   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:05:16.426311   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:05:16.426584   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:05:16.426769   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.426931   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:05:16.427077   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:05:16.427304   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:05:16.427481   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:05:16.427498   42953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:06:47.230465   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:06:47.230494   42953 machine.go:97] duration metric: took 1m31.379986436s to provisionDockerMachine
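	The SSH command above stages the runtime flag in /etc/sysconfig/crio.minikube and then restarts CRI-O (the restart accounts for most of the 1m31s noted here). A hedged sanity check on the guest, not captured in this run, would be:
	# Confirm the file the tee wrote and that CRI-O came back up after the restart.
	cat /etc/sysconfig/crio.minikube    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl is-active crio            # expected: active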
	I0531 19:06:47.230508   42953 start.go:293] postStartSetup for "ha-080809" (driver="kvm2")
	I0531 19:06:47.230522   42953 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:06:47.230544   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.230837   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:06:47.230858   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.234069   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.234523   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.234541   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.234704   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.234867   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.234993   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.235155   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.319937   42953 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:06:47.324721   42953 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:06:47.324744   42953 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:06:47.324795   42953 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:06:47.324859   42953 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:06:47.324869   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 19:06:47.324952   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:06:47.335353   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:06:47.360918   42953 start.go:296] duration metric: took 130.39778ms for postStartSetup
	I0531 19:06:47.360982   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.361257   42953 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0531 19:06:47.361280   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.363788   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.364187   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.364222   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.364383   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.364546   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.364699   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.364819   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	W0531 19:06:47.446823   42953 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0531 19:06:47.446842   42953 fix.go:56] duration metric: took 1m31.617783514s for fixHost
	I0531 19:06:47.446861   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.449225   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.449572   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.449601   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.449686   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.449843   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.449991   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.450187   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.450353   42953 main.go:141] libmachine: Using SSH client type: native
	I0531 19:06:47.450541   42953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.2 22 <nil> <nil>}
	I0531 19:06:47.450553   42953 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 19:06:47.553728   42953 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717182407.525069982
	
	I0531 19:06:47.553750   42953 fix.go:216] guest clock: 1717182407.525069982
	I0531 19:06:47.553759   42953 fix.go:229] Guest: 2024-05-31 19:06:47.525069982 +0000 UTC Remote: 2024-05-31 19:06:47.446849324 +0000 UTC m=+91.750573158 (delta=78.220658ms)
	I0531 19:06:47.553780   42953 fix.go:200] guest clock delta is within tolerance: 78.220658ms
	I0531 19:06:47.553786   42953 start.go:83] releasing machines lock for "ha-080809", held for 1m31.724742792s
	I0531 19:06:47.553804   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.554061   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:06:47.556766   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.557173   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.557194   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.557366   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.557767   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.557952   42953 main.go:141] libmachine: (ha-080809) Calling .DriverName
	I0531 19:06:47.558037   42953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:06:47.558090   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.558222   42953 ssh_runner.go:195] Run: cat /version.json
	I0531 19:06:47.558247   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHHostname
	I0531 19:06:47.560479   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.560726   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.560884   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.560909   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.561051   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.561106   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:47.561129   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:47.561209   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.561378   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.561392   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHPort
	I0531 19:06:47.561549   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.561580   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHKeyPath
	I0531 19:06:47.561676   42953 main.go:141] libmachine: (ha-080809) Calling .GetSSHUsername
	I0531 19:06:47.561822   42953 sshutil.go:53] new ssh client: &{IP:192.168.39.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/ha-080809/id_rsa Username:docker}
	I0531 19:06:47.637258   42953 ssh_runner.go:195] Run: systemctl --version
	I0531 19:06:47.665572   42953 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:06:47.825588   42953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 19:06:47.831844   42953 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:06:47.831895   42953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:06:47.841013   42953 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:06:47.841025   42953 start.go:494] detecting cgroup driver to use...
	I0531 19:06:47.841083   42953 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:06:47.857172   42953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:06:47.870612   42953 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:06:47.870655   42953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:06:47.883514   42953 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:06:47.896434   42953 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:06:48.041355   42953 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:06:48.188309   42953 docker.go:233] disabling docker service ...
	I0531 19:06:48.188387   42953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:06:48.206186   42953 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:06:48.219529   42953 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:06:48.359789   42953 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:06:48.504832   42953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:06:48.519261   42953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:06:48.537606   42953 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:06:48.537655   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.548170   42953 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:06:48.548225   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.558561   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.568720   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.578919   42953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:06:48.589249   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.599356   42953 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.610126   42953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:06:48.620219   42953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:06:48.629410   42953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:06:48.638718   42953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:06:48.775372   42953 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:06:55.839370   42953 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.063960081s)
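	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before this restart. Assuming each pattern matched, the touched keys should read roughly as follows (a reconstruction from the commands above, not captured output):
	# Show only the keys the provisioning step edits:
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",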
	I0531 19:06:55.839402   42953 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:06:55.839456   42953 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:06:55.844656   42953 start.go:562] Will wait 60s for crictl version
	I0531 19:06:55.844716   42953 ssh_runner.go:195] Run: which crictl
	I0531 19:06:55.848552   42953 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:06:55.890589   42953 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:06:55.890672   42953 ssh_runner.go:195] Run: crio --version
	I0531 19:06:55.919723   42953 ssh_runner.go:195] Run: crio --version
	I0531 19:06:55.950488   42953 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 19:06:55.952068   42953 main.go:141] libmachine: (ha-080809) Calling .GetIP
	I0531 19:06:55.954611   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:55.954934   42953 main.go:141] libmachine: (ha-080809) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:26:4d", ip: ""} in network mk-ha-080809: {Iface:virbr1 ExpiryTime:2024-05-31 19:41:03 +0000 UTC Type:0 Mac:52:54:00:07:26:4d Iaid: IPaddr:192.168.39.2 Prefix:24 Hostname:ha-080809 Clientid:01:52:54:00:07:26:4d}
	I0531 19:06:55.954971   42953 main.go:141] libmachine: (ha-080809) DBG | domain ha-080809 has defined IP address 192.168.39.2 and MAC address 52:54:00:07:26:4d in network mk-ha-080809
	I0531 19:06:55.955177   42953 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 19:06:55.960058   42953 kubeadm.go:877] updating cluster {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Cl
usterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:06:55.960195   42953 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:06:55.960236   42953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:06:56.005050   42953 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:06:56.005068   42953 crio.go:433] Images already preloaded, skipping extraction
	I0531 19:06:56.005118   42953 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:06:56.040470   42953 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:06:56.040488   42953 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:06:56.040501   42953 kubeadm.go:928] updating node { 192.168.39.2 8443 v1.30.1 crio true true} ...
	I0531 19:06:56.040583   42953 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-080809 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
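	The kubelet unit fragment above is what later gets written to the systemd drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf, scp'd a few steps below). A hedged way to confirm the effective flags on the node:
	# Show the merged kubelet unit (base unit plus the 10-kubeadm.conf drop-in).
	systemctl cat kubelet
	# --hostname-override and --node-ip should match the node above (ha-080809 / 192.168.39.2).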
	I0531 19:06:56.040642   42953 ssh_runner.go:195] Run: crio config
	I0531 19:06:56.086792   42953 cni.go:84] Creating CNI manager for ""
	I0531 19:06:56.086813   42953 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0531 19:06:56.086821   42953 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:06:56.086841   42953 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-080809 NodeName:ha-080809 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:06:56.086980   42953 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-080809"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
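	The rendered config above is staged a few steps below as /var/tmp/minikube/kubeadm.yaml.new. Assuming a kubeadm binary sits alongside kubelet under /var/lib/minikube/binaries/v1.30.1 (an assumption, not shown in this log) and that the release ships the validate subcommand, a hedged sanity check would be:
	# Validate the staged kubeadm config against the v1beta3 API types (path and subcommand assumed, see note above).
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new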
	
	I0531 19:06:56.086996   42953 kube-vip.go:115] generating kube-vip config ...
	I0531 19:06:56.087034   42953 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0531 19:06:56.100769   42953 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0531 19:06:56.100857   42953 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
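	With cp_enable and lb_enable set in the manifest above, kube-vip should place the control-plane VIP 192.168.39.254 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease. A hedged check from that node (not part of the captured run):
	# The VIP shows up as a secondary address on the interface named in vip_interface (eth0 here).
	ip -4 addr show dev eth0 | grep 192.168.39.254
	# The API server should also answer on the VIP's load-balanced port (8443); cert checks skipped for brevity.
	curl -sk https://192.168.39.254:8443/healthz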
	I0531 19:06:56.100907   42953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 19:06:56.112303   42953 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:06:56.112354   42953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0531 19:06:56.123865   42953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0531 19:06:56.142116   42953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:06:56.159914   42953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2147 bytes)
	I0531 19:06:56.177772   42953 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0531 19:06:56.197290   42953 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0531 19:06:56.201597   42953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:06:56.349094   42953 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:06:56.364626   42953 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809 for IP: 192.168.39.2
	I0531 19:06:56.364644   42953 certs.go:194] generating shared ca certs ...
	I0531 19:06:56.364660   42953 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:06:56.364809   42953 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:06:56.364857   42953 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:06:56.364883   42953 certs.go:256] generating profile certs ...
	I0531 19:06:56.365018   42953 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/client.key
	I0531 19:06:56.365092   42953 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key.bcea5091
	I0531 19:06:56.365140   42953 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key
	I0531 19:06:56.365154   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:06:56.365182   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:06:56.365199   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:06:56.365215   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:06:56.365229   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:06:56.365256   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:06:56.365275   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:06:56.365293   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:06:56.365368   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:06:56.365407   42953 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:06:56.365420   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:06:56.365457   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:06:56.365488   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:06:56.365524   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:06:56.365578   42953 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:06:56.365611   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.365631   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.365648   42953 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.366267   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:06:56.389857   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:06:56.412315   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:06:56.434640   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:06:56.457113   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0531 19:06:56.479636   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:06:56.502355   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:06:56.525781   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/ha-080809/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:06:56.548991   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:06:56.580035   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:06:56.680077   42953 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:06:56.727782   42953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:06:56.770785   42953 ssh_runner.go:195] Run: openssl version
	I0531 19:06:56.786236   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:06:56.804516   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.814189   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.814242   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:06:56.824144   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 19:06:56.855733   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:06:56.875030   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.880665   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.880708   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:06:56.903708   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:06:56.930964   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:06:56.949631   42953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.958031   42953 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.958092   42953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:06:56.973007   42953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
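	The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's rehash convention: the link name is the certificate's subject hash, which is exactly what the openssl x509 -hash runs above print, plus a ".0" collision counter. Illustrated for the cluster CA:
	# The hash printed here is the basename used for the symlink created above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem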
	I0531 19:06:56.994914   42953 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:06:57.012949   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:06:57.028340   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:06:57.040596   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:06:57.061342   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:06:57.069072   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:06:57.075599   42953 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
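	Each -checkend 86400 run above exits 0 only if the named certificate is still valid 86400 seconds (24 hours) from now, which is what the preflight relies on. A stand-alone illustration of the same check:
	# openssl prints a short "will/will not expire" message; the exit status is what matters here.
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "etcd server cert valid for at least 24h"
	else
	  echo "etcd server cert expires within 24h (or could not be read)"
	fi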
	I0531 19:06:57.082955   42953 kubeadm.go:391] StartCluster: {Name:ha-080809 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 Clust
erName:ha-080809 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.51 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.132 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.40 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:
false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:06:57.083070   42953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:06:57.083135   42953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:06:57.144737   42953 cri.go:89] found id: "e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec"
	I0531 19:06:57.144763   42953 cri.go:89] found id: "9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb"
	I0531 19:06:57.144769   42953 cri.go:89] found id: "94447cc9162a8662bc6d61523737ca3111d52b55860629945dfd662910906dc6"
	I0531 19:06:57.144774   42953 cri.go:89] found id: "cb38fb459f7a7987d722e88cc676f0d515254f0eca85a79ccc55418444def51f"
	I0531 19:06:57.144778   42953 cri.go:89] found id: "9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6"
	I0531 19:06:57.144782   42953 cri.go:89] found id: "e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a"
	I0531 19:06:57.144786   42953 cri.go:89] found id: "02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7"
	I0531 19:06:57.144790   42953 cri.go:89] found id: "2b4c89df16ec6055b6724b6dae8784996c313a70c1caba330d44afd4a7421a70"
	I0531 19:06:57.144794   42953 cri.go:89] found id: "255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4"
	I0531 19:06:57.144802   42953 cri.go:89] found id: "b3f030d4aefd7af0862533295ea5bc8c09c86a8d003c0f7c9c9083cda428178e"
	I0531 19:06:57.144809   42953 cri.go:89] found id: "6ced8b7504697fffee4ac8b191a59d6b7f210865b0427066cf44a64b3c692c89"
	I0531 19:06:57.144814   42953 cri.go:89] found id: "7d3a5d770ba93adf64579e4436df01691cb264594c97950931eee88bb0241208"
	I0531 19:06:57.144818   42953 cri.go:89] found id: "3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7"
	I0531 19:06:57.144823   42953 cri.go:89] found id: "6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783"
	I0531 19:06:57.144832   42953 cri.go:89] found id: "2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860"
	I0531 19:06:57.144839   42953 cri.go:89] found id: "817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3"
	I0531 19:06:57.144843   42953 cri.go:89] found id: ""
	I0531 19:06:57.144897   42953 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.943953954Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182649943932504,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=901520eb-35c0-4793-971a-01061c2b6fc2 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.944568963Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8bd99e78-e751-49fb-b2a5-52d8f87537b2 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.944619690Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8bd99e78-e751-49fb-b2a5-52d8f87537b2 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.945102157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56e920c2d4f16a5b9b73a403a8e9494c82c3dd8cad62e33a444e4c78dbb2554b,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717182507425364123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7063be895a852581fed4016fb4bb6811ec10b43392f56eb98f164a2ea11e2e9,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717182482433022561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19e33872becda2d643575a3c6a61db54fd5399056c22760b193efffd7c2dc51,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182464420626134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1545802b3694c69c96e0b8f1376367c9978963d6b9b17b0a9b9cac8d209b8e85,PodSandboxId:b4b9852b234c4b371ee2278df28571eda0485f728f60f1db11e15c7d77a696b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717182454681323657,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0450ed2b1a52b370ad981a4d3ba36d35ee61447594e6aed8ff5848d54338b28,PodSandboxId:61ab68932a848db68dd1f130c42cc6745afb0a472154ce58304b170ee25531a8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717182421086235198,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb03ea5fef382c36feb31157f866d003b823bb3c70931b9e771dc9883e4aca79,PodSandboxId:a7c581d19adb1dc795a616688a00ce2d409a9ea89374d042b4d1532211f7d680,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182421122953748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94048549b23fb0083ae1c94068c16ba77363384fa7dce322ee3587eb8be4a5,PodSandboxId:1aaf7e53b9b0c6c3eb4375a99744ebed6c6bb6287105db118a515e4dc1118e62,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717182421040500252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec,PodSandboxId:3513cec2669bdd10ead572815cfb78cac6f0638db3808ea6be9e9fffe9eb89d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182416876352873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182416810703851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717181946918543224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1717181561462973889,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181545968227021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181544500518379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181544404189047,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8bd99e78-e751-49fb-b2a5-52d8f87537b2 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.987243518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8ff01a9-5841-4497-8ec8-e23369f6c7b0 name=/runtime.v1.RuntimeService/Version
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.987310645Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8ff01a9-5841-4497-8ec8-e23369f6c7b0 name=/runtime.v1.RuntimeService/Version
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.988411246Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ee8a0a60-0d78-49cf-b3cd-12eece69f3c2 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.988822635Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182649988802818,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ee8a0a60-0d78-49cf-b3cd-12eece69f3c2 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.989447046Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2075367a-15e2-4ec3-b490-66763fab0dff name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.989499288Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2075367a-15e2-4ec3-b490-66763fab0dff name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:49 ha-080809 crio[6401]: time="2024-05-31 19:10:49.989835922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56e920c2d4f16a5b9b73a403a8e9494c82c3dd8cad62e33a444e4c78dbb2554b,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717182507425364123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7063be895a852581fed4016fb4bb6811ec10b43392f56eb98f164a2ea11e2e9,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717182482433022561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19e33872becda2d643575a3c6a61db54fd5399056c22760b193efffd7c2dc51,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182464420626134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1545802b3694c69c96e0b8f1376367c9978963d6b9b17b0a9b9cac8d209b8e85,PodSandboxId:b4b9852b234c4b371ee2278df28571eda0485f728f60f1db11e15c7d77a696b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717182454681323657,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0450ed2b1a52b370ad981a4d3ba36d35ee61447594e6aed8ff5848d54338b28,PodSandboxId:61ab68932a848db68dd1f130c42cc6745afb0a472154ce58304b170ee25531a8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717182421086235198,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb03ea5fef382c36feb31157f866d003b823bb3c70931b9e771dc9883e4aca79,PodSandboxId:a7c581d19adb1dc795a616688a00ce2d409a9ea89374d042b4d1532211f7d680,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182421122953748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94048549b23fb0083ae1c94068c16ba77363384fa7dce322ee3587eb8be4a5,PodSandboxId:1aaf7e53b9b0c6c3eb4375a99744ebed6c6bb6287105db118a515e4dc1118e62,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717182421040500252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec,PodSandboxId:3513cec2669bdd10ead572815cfb78cac6f0638db3808ea6be9e9fffe9eb89d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182416876352873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182416810703851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717181946918543224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1717181561462973889,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181545968227021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181544500518379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181544404189047,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2075367a-15e2-4ec3-b490-66763fab0dff name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.028459163Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=96b4289d-38f8-4007-844b-b8e41faa050a name=/runtime.v1.RuntimeService/Version
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.028550299Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=96b4289d-38f8-4007-844b-b8e41faa050a name=/runtime.v1.RuntimeService/Version
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.029754960Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=71a5cad6-b8cf-456f-aade-701bb0519509 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.030427639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182650030398268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=71a5cad6-b8cf-456f-aade-701bb0519509 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.031012178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1355b48-401b-4f72-9986-2596b54d0bb4 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.031085203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1355b48-401b-4f72-9986-2596b54d0bb4 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.031464604Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56e920c2d4f16a5b9b73a403a8e9494c82c3dd8cad62e33a444e4c78dbb2554b,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717182507425364123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7063be895a852581fed4016fb4bb6811ec10b43392f56eb98f164a2ea11e2e9,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717182482433022561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19e33872becda2d643575a3c6a61db54fd5399056c22760b193efffd7c2dc51,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182464420626134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1545802b3694c69c96e0b8f1376367c9978963d6b9b17b0a9b9cac8d209b8e85,PodSandboxId:b4b9852b234c4b371ee2278df28571eda0485f728f60f1db11e15c7d77a696b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717182454681323657,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0450ed2b1a52b370ad981a4d3ba36d35ee61447594e6aed8ff5848d54338b28,PodSandboxId:61ab68932a848db68dd1f130c42cc6745afb0a472154ce58304b170ee25531a8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717182421086235198,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb03ea5fef382c36feb31157f866d003b823bb3c70931b9e771dc9883e4aca79,PodSandboxId:a7c581d19adb1dc795a616688a00ce2d409a9ea89374d042b4d1532211f7d680,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182421122953748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94048549b23fb0083ae1c94068c16ba77363384fa7dce322ee3587eb8be4a5,PodSandboxId:1aaf7e53b9b0c6c3eb4375a99744ebed6c6bb6287105db118a515e4dc1118e62,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717182421040500252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec,PodSandboxId:3513cec2669bdd10ead572815cfb78cac6f0638db3808ea6be9e9fffe9eb89d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182416876352873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182416810703851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717181946918543224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1717181561462973889,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181545968227021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717181544500518379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181544404189047,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1355b48-401b-4f72-9986-2596b54d0bb4 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.074186350Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e3a85dc-3604-48d8-b152-1233fdbf8079 name=/runtime.v1.RuntimeService/Version
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.074255615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e3a85dc-3604-48d8-b152-1233fdbf8079 name=/runtime.v1.RuntimeService/Version
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.075214021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8c142a07-26e7-4a5b-8f6b-0eb4599af6f1 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.075634608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717182650075613774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154742,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8c142a07-26e7-4a5b-8f6b-0eb4599af6f1 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.076378798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc539560-4134-41fc-96bf-7f4fabeb31dd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.076429924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc539560-4134-41fc-96bf-7f4fabeb31dd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:10:50 ha-080809 crio[6401]: time="2024-05-31 19:10:50.076767315Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:56e920c2d4f16a5b9b73a403a8e9494c82c3dd8cad62e33a444e4c78dbb2554b,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:6,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717182507425364123,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 6,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7063be895a852581fed4016fb4bb6811ec10b43392f56eb98f164a2ea11e2e9,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:7,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717182482433022561,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 7,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b19e33872becda2d643575a3c6a61db54fd5399056c22760b193efffd7c2dc51,PodSandboxId:05dcad590ad22e1cd6c27225ed942151515615aaae193eb5359a855b8855d646,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:5,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182464420626134,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b78e0f15ef2137b451056daa67a462ca,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 5,io.kubernetes.container
.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1545802b3694c69c96e0b8f1376367c9978963d6b9b17b0a9b9cac8d209b8e85,PodSandboxId:b4b9852b234c4b371ee2278df28571eda0485f728f60f1db11e15c7d77a696b4,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717182454681323657,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f0450ed2b1a52b370ad981a4d3ba36d35ee61447594e6aed8ff5848d54338b28,PodSandboxId:61ab68932a848db68dd1f130c42cc6745afb0a472154ce58304b170ee25531a8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1717182421086235198,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb03ea5fef382c36feb31157f866d003b823bb3c70931b9e771dc9883e4aca79,PodSandboxId:a7c581d19adb1dc795a616688a00ce2d409a9ea89374d042b4d1532211f7d680,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182421122953748,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\
"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db94048549b23fb0083ae1c94068c16ba77363384fa7dce322ee3587eb8be4a5,PodSandboxId:1aaf7e53b9b0c6c3eb4375a99744ebed6c6bb6287105db118a515e4dc1118e62,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717182421040500252,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e37
13f42,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec,PodSandboxId:3513cec2669bdd10ead572815cfb78cac6f0638db3808ea6be9e9fffe9eb89d9,Metadata:&ContainerMetadata{Name:coredns,Attempt:3,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717182416876352873,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.p
orts: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb,PodSandboxId:8ec8c0bf5e00e1ec46c2f41d5f12ec41e02516470310eec1f219cdd09a1ca02e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:6,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182416810703851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver
-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63f626d576ff021f9c6d97fcb391e443,},Annotations:map[string]string{io.kubernetes.container.hash: e2338f1c,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1290300ca030fcbc09041229f3138076405e9151443c1f4a36a586ec298b87a,PodSandboxId:33c3621daf2d9713c3c468e97c9a86b854d085065ca7e67e455ea7547b4d768b,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717181946918543224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c
4f-mjhlv,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 61a4c85e-dc75-49ee-82cc-90f06c8d28b2,},Annotations:map[string]string{io.kubernetes.container.hash: 6fa9fb8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f8ac9229e760d6d9937c3853ea9688047c2941638181eb2c1c6d210598cbbe6,PodSandboxId:205250b7981c71fa391805c05999f9614f5ba1b83c76b54181b3cae627fece56,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_EXITED,CreatedAt:1717181561462973889,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-080809,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 980b40354a0f55c9ad17d5a06e5769c9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a,PodSandboxId:86c8aa4ade84f282046f7a74c2727c4725edfc6620512f809fdd8f2668ce7e91,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181545968227021,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-kbkpk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1de
171b4-ac9e-42d1-be02-d585d0d01587,},Annotations:map[string]string{io.kubernetes.container.hash: ff4b1515,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7,PodSandboxId:841629e56b3258cdd4a2baa069bec950b7f1aba1169714e21a4cd5b33826f654,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:17171815
44500518379,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0533be9b0d7be6e58a747756dff0c7e6,},Annotations:map[string]string{io.kubernetes.container.hash: e3713f42,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4,PodSandboxId:581a95181abc3f3528b653a9ba0d138310f8c7aef1f47631cee5ebc3d02e9d2a,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717181544404189047,Labels:map[string]string{io.kubernete
s.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-cr2wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f90f810-88d7-443c-8537-1eb0c65417be,},Annotations:map[string]string{io.kubernetes.container.hash: cdec410f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7,PodSandboxId:2bdbf3a323266639dd133f9bfd38824818c5591d36c81c794db6774f74151027,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},User
SpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717181396364254946,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fzzfx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1f8aa910-a7fc-44cf-88ad-dbf14049ac1d,},Annotations:map[string]string{io.kubernetes.container.hash: e1782900,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ecd8583d0646897b0902430658de1bad8abc7ec1875bb8bd0d4d627f9b51783,PodSandboxId:8bd72a9b0e002c3d46c885ab2344a83b8ead71e48d95acd3e6412778568cfed7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHand
ler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717181333120016677,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11521630-b23f-4574-9eda-8d0e5d0b7e10,},Annotations:map[string]string{io.kubernetes.container.hash: 65a82782,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860,PodSandboxId:d23870754e4e838976fb53626892a2621ff79183f8f55e444f28e5360503c887,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097
150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717180908041450805,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmg2w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50600abe-9fae-41f8-8ff2-05c8407163c0,},Annotations:map[string]string{io.kubernetes.container.hash: 2c74c1f8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3,PodSandboxId:a1254b0478464dd18928001eaff7de8cefa549124a0074d56328a14b4700d64a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8
fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717180887210721624,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-080809,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 65d8377fef588f9e0e1afe5ebb475786,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc539560-4134-41fc-96bf-7f4fabeb31dd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	56e920c2d4f16       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      2 minutes ago       Running             kube-controller-manager   6                   05dcad590ad22       kube-controller-manager-ha-080809
	f7063be895a85       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      2 minutes ago       Running             kube-apiserver            7                   8ec8c0bf5e00e       kube-apiserver-ha-080809
	b19e33872becd       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Exited              kube-controller-manager   5                   05dcad590ad22       kube-controller-manager-ha-080809
	1545802b3694c       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   b4b9852b234c4       busybox-fc5497c4f-mjhlv
	eb03ea5fef382       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   3                   a7c581d19adb1       coredns-7db6d8ff4d-kbkpk
	f0450ed2b1a52       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      3 minutes ago       Running             kube-vip                  1                   61ab68932a848       kube-vip-ha-080809
	db94048549b23       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      3                   1aaf7e53b9b0c       etcd-ha-080809
	e3f8595fe9f29       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   3                   3513cec2669bd       coredns-7db6d8ff4d-cr2wg
	9f27d07587e1c       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Exited              kube-apiserver            6                   8ec8c0bf5e00e       kube-apiserver-ha-080809
	b1290300ca030       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   33c3621daf2d9       busybox-fc5497c4f-mjhlv
	9f8ac9229e760       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      18 minutes ago      Exited              kube-vip                  0                   205250b7981c7       kube-vip-ha-080809
	e769a1e959901       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Exited              coredns                   2                   86c8aa4ade84f       coredns-7db6d8ff4d-kbkpk
	02b6c1aa0696a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      18 minutes ago      Exited              etcd                      2                   841629e56b325       etcd-ha-080809
	255424e20e7a6       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Exited              coredns                   2                   581a95181abc3       coredns-7db6d8ff4d-cr2wg
	3b39b3e6d2204       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      20 minutes ago      Exited              kindnet-cni               1                   2bdbf3a323266       kindnet-fzzfx
	6ecd8583d0646       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   8bd72a9b0e002       storage-provisioner
	2442a39b94718       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      29 minutes ago      Exited              kube-proxy                0                   d23870754e4e8       kube-proxy-dmg2w
	817682d6f214f       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      29 minutes ago      Exited              kube-scheduler            0                   a1254b0478464       kube-scheduler-ha-080809
	
	
	==> coredns [255424e20e7a6f7428e47f91efcd657e31ac049917b935720cd0ad7f057a44b4] <==
	Trace[2067191073]: ---"Objects listed" error:Unauthorized 12093ms (19:04:38.782)
	Trace[2067191073]: [12.093919623s] [12.093919623s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[2043929497]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:42.543) (total time: 10243ms):
	Trace[2043929497]: ---"Objects listed" error:Unauthorized 10243ms (19:04:52.786)
	Trace[2043929497]: [10.243512004s] [10.243512004s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3159": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3159": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1085576986]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:59.219) (total time: 10428ms):
	Trace[1085576986]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 10428ms (19:05:09.648)
	Trace[1085576986]: [10.428823823s] [10.428823823s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3152": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3152": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e3f8595fe9f29359f6dfa840fafa0b075678b8d65d5c05c9ae61c29de90607ec] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e769a1e959901d2988f058294b81df80e3bcd344f06fce36ebdcc5e6c6bbc55a] <==
	Trace[1845718398]: ---"Objects listed" error:Unauthorized 12758ms (19:04:45.766)
	Trace[1845718398]: [12.758297077s] [12.758297077s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[1938946690]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:33.340) (total time: 12426ms):
	Trace[1938946690]: ---"Objects listed" error:Unauthorized 12426ms (19:04:45.766)
	Trace[1938946690]: [12.426636611s] [12.426636611s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1407987040]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:52.472) (total time: 17174ms):
	Trace[1407987040]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 17174ms (19:05:09.647)
	Trace[1407987040]: [17.1747591s] [17.1747591s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=3184": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[2097501263]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:54.232) (total time: 15415ms):
	Trace[2097501263]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 15415ms (19:05:09.647)
	Trace[2097501263]: [15.415625285s] [15.415625285s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] plugin/kubernetes: Trace[1835612567]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:04:54.137) (total time: 15510ms):
	Trace[1835612567]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF 15510ms (19:05:09.647)
	Trace[1835612567]: [15.510206814s] [15.510206814s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=3193": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [eb03ea5fef382c36feb31157f866d003b823bb3c70931b9e771dc9883e4aca79] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-080809
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T18_41_34_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:41:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:10:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:07:43 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:07:43 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:07:43 +0000   Fri, 31 May 2024 18:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:07:43 +0000   Fri, 31 May 2024 18:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.2
	  Hostname:    ha-080809
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0eb1bd8d3c42441893aa8670274ffb1e
	  System UUID:                0eb1bd8d-3c42-4418-93aa-8670274ffb1e
	  Boot ID:                    a71cdc12-b2dc-410e-bd89-30705a38ac97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-mjhlv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-cr2wg             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 coredns-7db6d8ff4d-kbkpk             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-ha-080809                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kindnet-fzzfx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29m
	  kube-system                 kube-apiserver-ha-080809             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-080809    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-dmg2w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-080809             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-080809                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 29m                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    29m                  kubelet          Node ha-080809 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29m                  kubelet          Node ha-080809 status is now: NodeHasSufficientPID
	  Normal   Starting                 29m                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  29m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29m                  kubelet          Node ha-080809 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29m                  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   NodeReady                28m                  kubelet          Node ha-080809 status is now: NodeReady
	  Normal   RegisteredNode           27m                  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           26m                  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           17m                  node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Warning  ContainerGCFailed        4m17s (x3 over 19m)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m57s                node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	  Normal   RegisteredNode           2m11s                node-controller  Node ha-080809 event: Registered Node ha-080809 in Controller
	
	
	Name:               ha-080809-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_42_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:42:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:10:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:08:29 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:08:29 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:08:29 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:08:29 +0000   Fri, 31 May 2024 18:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.51
	  Hostname:    ha-080809-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 26fe5dde81c649a890a62a46837233ef
	  System UUID:                26fe5dde-81c6-49a8-90a6-2a46837233ef
	  Boot ID:                    ddd0967e-94fb-4920-b34d-8874a93b9ebb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dwnsz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  default                     busybox-fc5497c4f-wqwrw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-080809-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-6jqp4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-080809-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-080809-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-wxq4h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-080809-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-080809-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 17m                    kube-proxy       
	  Normal  Starting                 28m                    kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)      kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           27m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           26m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  NodeNotReady             24m                    node-controller  Node ha-080809-m02 status is now: NodeNotReady
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)      kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)      kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           17m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           17m                    node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  Starting                 3m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m30s (x8 over 3m30s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m30s (x8 over 3m30s)  kubelet          Node ha-080809-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x7 over 3m30s)  kubelet          Node ha-080809-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m57s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	  Normal  RegisteredNode           2m11s                  node-controller  Node ha-080809-m02 event: Registered Node ha-080809-m02 in Controller
	
	
	Name:               ha-080809-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_43_55_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:43:52 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:44:23 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.132
	  Hostname:    ha-080809-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 547b87f87b134b9798b7d615ab502227
	  System UUID:                547b87f8-7b13-4b97-98b7-d615ab502227
	  Boot ID:                    8e35dd93-e6c7-4f99-9f16-d89c3fb275fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-qtbxj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 etcd-ha-080809-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         26m
	  kube-system                 kindnet-qzfbh                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-080809-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-ha-080809-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-nzlz5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-080809-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-vip-ha-080809-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  NodeHasSufficientMemory  26m (x9 over 26m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node ha-080809-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    26m (x7 over 26m)  kubelet          Node ha-080809-m03 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           26m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           26m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  NodeNotReady             16m                node-controller  Node ha-080809-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	  Normal  RegisteredNode           2m11s              node-controller  Node ha-080809-m03 event: Registered Node ha-080809-m03 in Controller
	
	
	Name:               ha-080809-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-080809-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=ha-080809
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T18_44_54_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 18:44:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-080809-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 18:48:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 18:45:24 +0000   Fri, 31 May 2024 18:53:57 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.40
	  Hostname:    ha-080809-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 4babfa9b2c834bb583245811edaf0c5d
	  System UUID:                4babfa9b-2c83-4bb5-8324-5811edaf0c5d
	  Boot ID:                    b6f38232-474d-40b0-b4c0-a2b12b8f03ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gtssc       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      25m
	  kube-system                 kube-proxy-mzhqp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet          Node ha-080809-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     25m (x2 over 25m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  25m (x2 over 25m)  kubelet          Node ha-080809-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           25m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           25m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           25m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeReady                25m                kubelet          Node ha-080809-m04 status is now: NodeReady
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           17m                node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  NodeNotReady             16m                node-controller  Node ha-080809-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           2m57s              node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	  Normal  RegisteredNode           2m11s              node-controller  Node ha-080809-m04 event: Registered Node ha-080809-m04 in Controller
	
	
	==> dmesg <==
	[  +0.063717] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.978929] systemd-fstab-generator[1356]: Ignoring "noauto" option for root device
	[  +0.075713] kauditd_printk_skb: 79 callbacks suppressed
	[ +14.762573] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.211184] kauditd_printk_skb: 38 callbacks suppressed
	[May31 18:42] kauditd_printk_skb: 26 callbacks suppressed
	[May31 18:50] systemd-fstab-generator[3962]: Ignoring "noauto" option for root device
	[  +0.279335] systemd-fstab-generator[4022]: Ignoring "noauto" option for root device
	[  +0.265342] systemd-fstab-generator[4055]: Ignoring "noauto" option for root device
	[  +0.167968] systemd-fstab-generator[4067]: Ignoring "noauto" option for root device
	[  +0.358424] systemd-fstab-generator[4095]: Ignoring "noauto" option for root device
	[May31 18:52] systemd-fstab-generator[4310]: Ignoring "noauto" option for root device
	[  +0.086879] kauditd_printk_skb: 183 callbacks suppressed
	[  +5.033045] kauditd_printk_skb: 66 callbacks suppressed
	[  +7.553563] kauditd_printk_skb: 6 callbacks suppressed
	[ +13.792945] kauditd_printk_skb: 1 callbacks suppressed
	[May31 19:06] systemd-fstab-generator[6318]: Ignoring "noauto" option for root device
	[  +0.149108] systemd-fstab-generator[6331]: Ignoring "noauto" option for root device
	[  +0.175965] systemd-fstab-generator[6345]: Ignoring "noauto" option for root device
	[  +0.147442] systemd-fstab-generator[6357]: Ignoring "noauto" option for root device
	[  +0.267581] systemd-fstab-generator[6385]: Ignoring "noauto" option for root device
	[  +7.577250] systemd-fstab-generator[6488]: Ignoring "noauto" option for root device
	[  +0.079452] kauditd_printk_skb: 100 callbacks suppressed
	[May31 19:07] kauditd_printk_skb: 76 callbacks suppressed
	[ +25.717097] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [02b6c1aa0696a8af7c4c761e056c3bcc4ac0eb1b1ff25c04dd1ac559e70419a7] <==
	{"level":"info","ts":"2024-05-31T19:05:16.274231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6c80de388e5020e8 [logterm: 3, index: 3768] sent MsgPreVote request to 2a8c31cde4a03aae at term 3"}
	{"level":"info","ts":"2024-05-31T19:05:16.534827Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-31T19:05:16.534949Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-080809","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	{"level":"warn","ts":"2024-05-31T19:05:16.535054Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:05:16.53508Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:05:16.538397Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:05:16.538425Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-31T19:05:16.538472Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6c80de388e5020e8","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-05-31T19:05:16.538683Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538704Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538735Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.53878Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538827Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538931Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538948Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"29f55ddef6ecd05f"}
	{"level":"info","ts":"2024-05-31T19:05:16.538955Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.538963Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.538981Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.539049Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.539083Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.539108Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.539119Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"2a8c31cde4a03aae"}
	{"level":"info","ts":"2024-05-31T19:05:16.549253Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-05-31T19:05:16.549384Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.2:2380"}
	{"level":"info","ts":"2024-05-31T19:05:16.549411Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-080809","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.2:2380"],"advertise-client-urls":["https://192.168.39.2:2379"]}
	
	
	==> etcd [db94048549b23fb0083ae1c94068c16ba77363384fa7dce322ee3587eb8be4a5] <==
	{"level":"warn","ts":"2024-05-31T19:10:50.398076Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.434016Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.132:2380/version","remote-member-id":"2a8c31cde4a03aae","error":"Get \"https://192.168.39.132:2380/version\": dial tcp 192.168.39.132:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-31T19:10:50.434092Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"2a8c31cde4a03aae","error":"Get \"https://192.168.39.132:2380/version\": dial tcp 192.168.39.132:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-31T19:10:50.441221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.457639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.469978Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.473928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.490928Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.504203Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.517819Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.523424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.527094Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.539072Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.542126Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.549387Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.560484Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.563672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.566989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.574562Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.6091Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.618096Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.627302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.641775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.673058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-31T19:10:50.675329Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6c80de388e5020e8","from":"6c80de388e5020e8","remote-peer-id":"2a8c31cde4a03aae","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:10:50 up 29 min,  0 users,  load average: 0.33, 0.39, 0.37
	Linux ha-080809 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3b39b3e6d22047597c20b3d9663d2069e6483825d083a910eb8e2fa1b77904b7] <==
	I0531 18:49:56.699488       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0531 18:49:56.699638       1 main.go:107] hostIP = 192.168.39.2
	podIP = 192.168.39.2
	I0531 18:49:56.699821       1 main.go:116] setting mtu 1500 for CNI 
	I0531 18:49:56.699938       1 main.go:146] kindnetd IP family: "ipv4"
	I0531 18:49:56.699982       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0531 18:50:05.446913       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:19.448089       1 main.go:191] Failed to get nodes, retrying after error: etcdserver: request timed out
	I0531 18:50:20.449642       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0531 18:50:38.351425       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0531 18:50:41.423495       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kube-apiserver [9f27d07587e1c2854bd0f804926b4febf80b436d2881c6a863f46e97523caddb] <==
	I0531 19:06:57.118025       1 options.go:221] external host was not specified, using 192.168.39.2
	I0531 19:06:57.119058       1 server.go:148] Version: v1.30.1
	I0531 19:06:57.119110       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0531 19:06:57.725214       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:06:57.726046       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0531 19:06:57.726140       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0531 19:06:57.727728       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 19:06:57.729219       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0531 19:06:57.729250       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0531 19:06:57.729389       1 instance.go:299] Using reconciler: lease
	W0531 19:06:57.730140       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:06:58.726718       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:06:58.726824       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:06:58.730410       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:07:00.433159       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:07:00.434579       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:07:00.521219       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:07:17.724461       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0531 19:07:17.725758       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0531 19:07:17.730307       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [f7063be895a852581fed4016fb4bb6811ec10b43392f56eb98f164a2ea11e2e9] <==
	I0531 19:08:04.245385       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0531 19:08:04.245426       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0531 19:08:04.245573       1 controller.go:116] Starting legacy_token_tracking_controller
	I0531 19:08:04.245599       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0531 19:08:04.245702       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0531 19:08:04.301145       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 19:08:04.305711       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 19:08:04.305825       1 policy_source.go:224] refreshing policies
	I0531 19:08:04.322747       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:08:04.344423       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:08:04.344466       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:08:04.345551       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 19:08:04.347212       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 19:08:04.347299       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 19:08:04.347327       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 19:08:04.348230       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 19:08:04.349038       1 aggregator.go:165] initial CRD sync complete...
	I0531 19:08:04.349109       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 19:08:04.349143       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 19:08:04.349167       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:08:04.355800       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 19:08:05.254955       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0531 19:08:05.678172       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.2 192.168.39.51]
	I0531 19:08:05.679563       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 19:08:05.686548       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [56e920c2d4f16a5b9b73a403a8e9494c82c3dd8cad62e33a444e4c78dbb2554b] <==
	I0531 19:08:39.127034       1 shared_informer.go:320] Caches are synced for ephemeral
	I0531 19:08:39.128777       1 shared_informer.go:320] Caches are synced for attach detach
	I0531 19:08:39.132660       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0531 19:08:39.136453       1 shared_informer.go:320] Caches are synced for GC
	I0531 19:08:39.148940       1 shared_informer.go:320] Caches are synced for taint
	I0531 19:08:39.149106       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0531 19:08:39.176929       1 shared_informer.go:320] Caches are synced for namespace
	I0531 19:08:39.182401       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809-m04"
	I0531 19:08:39.183144       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809"
	I0531 19:08:39.183184       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809-m02"
	I0531 19:08:39.183230       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-080809-m03"
	I0531 19:08:39.183828       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0531 19:08:39.184386       1 shared_informer.go:320] Caches are synced for service account
	I0531 19:08:39.319620       1 shared_informer.go:320] Caches are synced for resource quota
	I0531 19:08:39.350726       1 shared_informer.go:320] Caches are synced for resource quota
	I0531 19:08:39.356365       1 shared_informer.go:320] Caches are synced for disruption
	I0531 19:08:39.747703       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:08:39.793043       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:08:39.793091       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0531 19:08:53.149070       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6t2hz\": the object has been modified; please apply your changes to the latest version and try again"
	I0531 19:08:53.149272       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"54a51294-0309-47fb-80d7-352a83127b17", APIVersion:"v1", ResourceVersion:"244", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6t2hz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6t2hz": the object has been modified; please apply your changes to the latest version and try again
	I0531 19:08:53.162744       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.768876ms"
	I0531 19:08:53.162945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.578µs"
	I0531 19:09:03.137983       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.27582ms"
	I0531 19:09:03.138122       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="81.605µs"
	
	
	==> kube-controller-manager [b19e33872becda2d643575a3c6a61db54fd5399056c22760b193efffd7c2dc51] <==
	I0531 19:07:44.828524       1 serving.go:380] Generated self-signed cert in-memory
	I0531 19:07:45.091518       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0531 19:07:45.091597       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:07:45.093145       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 19:07:45.093312       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:07:45.093379       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0531 19:07:45.093504       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0531 19:07:55.096145       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.2:8443/healthz\": dial tcp 192.168.39.2:8443: connect: connection refused"
	
	
	==> kube-proxy [2442a39b94718fdf446fc97d71f34e26f7e7bdd9844532372ebed18babc83860] <==
	E0531 18:49:37.041507       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112493       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112526       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:40.112597       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:40.112627       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257750       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257605       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.257924       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:46.257683       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:46.258008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:55.471454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:55.471631       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544241       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.544552       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:49:58.544816       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:49:58.545008       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:16.977062       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:16.977207       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1861": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:23.119757       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:23.120054       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-080809&resourceVersion=1760": dial tcp 192.168.39.254:8443: connect: no route to host
	W0531 18:50:26.191520       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	E0531 18:50:26.191634       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1783": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [817682d6f214f8cd4fd6a237ddc2c3bcb5cbc52fbe61a41818c49fc919b23ef3] <==
	W0531 18:50:37.956048       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:37.956139       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:38.789958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 18:50:38.790048       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 18:50:39.380066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 18:50:39.380145       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 18:50:39.438437       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.438537       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:39.864506       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 18:50:39.864612       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 18:50:39.954574       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:39.954665       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.130549       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0531 18:50:40.130602       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0531 18:50:40.173351       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 18:50:40.173447       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 18:50:40.275786       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 18:50:40.275960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 18:50:40.403575       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 18:50:40.403680       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 18:50:40.405616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 18:50:40.405714       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 18:50:40.695638       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:40.695697       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 18:50:41.623659       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:10:32 ha-080809 kubelet[1363]: E0531 19:10:32.418188    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:10:32 ha-080809 kubelet[1363]: E0531 19:10:32.418268    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:10:32 ha-080809 kubelet[1363]: E0531 19:10:32.418301    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:10:32 ha-080809 kubelet[1363]: E0531 19:10:32.418362    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	May 31 19:10:33 ha-080809 kubelet[1363]: E0531 19:10:33.428359    1363 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 19:10:33 ha-080809 kubelet[1363]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 19:10:33 ha-080809 kubelet[1363]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 19:10:33 ha-080809 kubelet[1363]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 19:10:33 ha-080809 kubelet[1363]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 19:10:39 ha-080809 kubelet[1363]: E0531 19:10:39.419906    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists"
	May 31 19:10:39 ha-080809 kubelet[1363]: E0531 19:10:39.419978    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:10:39 ha-080809 kubelet[1363]: E0531 19:10:39.419999    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\" already exists" pod="kube-system/kube-proxy-dmg2w"
	May 31 19:10:39 ha-080809 kubelet[1363]: E0531 19:10:39.420035    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-proxy-dmg2w_kube-system(50600abe-9fae-41f8-8ff2-05c8407163c0)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-proxy-dmg2w_kube-system_50600abe-9fae-41f8-8ff2-05c8407163c0_1\\\" already exists\"" pod="kube-system/kube-proxy-dmg2w" podUID="50600abe-9fae-41f8-8ff2-05c8407163c0"
	May 31 19:10:40 ha-080809 kubelet[1363]: E0531 19:10:40.419581    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists"
	May 31 19:10:40 ha-080809 kubelet[1363]: E0531 19:10:40.419718    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:10:40 ha-080809 kubelet[1363]: E0531 19:10:40.419743    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\" already exists" pod="kube-system/kube-scheduler-ha-080809"
	May 31 19:10:40 ha-080809 kubelet[1363]: E0531 19:10:40.419828    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-scheduler-ha-080809_kube-system(65d8377fef588f9e0e1afe5ebb475786)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kube-scheduler-ha-080809_kube-system_65d8377fef588f9e0e1afe5ebb475786_1\\\" already exists\"" pod="kube-system/kube-scheduler-ha-080809" podUID="65d8377fef588f9e0e1afe5ebb475786"
	May 31 19:10:42 ha-080809 kubelet[1363]: E0531 19:10:42.417366    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists"
	May 31 19:10:42 ha-080809 kubelet[1363]: E0531 19:10:42.417950    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:10:42 ha-080809 kubelet[1363]: E0531 19:10:42.418040    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\" already exists" pod="kube-system/storage-provisioner"
	May 31 19:10:42 ha-080809 kubelet[1363]: E0531 19:10:42.418186    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"storage-provisioner_kube-system(11521630-b23f-4574-9eda-8d0e5d0b7e10)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_storage-provisioner_kube-system_11521630-b23f-4574-9eda-8d0e5d0b7e10_1\\\" already exists\"" pod="kube-system/storage-provisioner" podUID="11521630-b23f-4574-9eda-8d0e5d0b7e10"
	May 31 19:10:45 ha-080809 kubelet[1363]: E0531 19:10:45.421604    1363 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists"
	May 31 19:10:45 ha-080809 kubelet[1363]: E0531 19:10:45.422043    1363 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:10:45 ha-080809 kubelet[1363]: E0531 19:10:45.422290    1363 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = pod sandbox with name \"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\" already exists" pod="kube-system/kindnet-fzzfx"
	May 31 19:10:45 ha-080809 kubelet[1363]: E0531 19:10:45.422388    1363 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kindnet-fzzfx_kube-system(1f8aa910-a7fc-44cf-88ad-dbf14049ac1d)\\\": rpc error: code = Unknown desc = pod sandbox with name \\\"k8s_kindnet-fzzfx_kube-system_1f8aa910-a7fc-44cf-88ad-dbf14049ac1d_1\\\" already exists\"" pod="kube-system/kindnet-fzzfx" podUID="1f8aa910-a7fc-44cf-88ad-dbf14049ac1d"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:10:49.643283   44274 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-080809 -n ha-080809
helpers_test.go:261: (dbg) Run:  kubectl --context ha-080809 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (335.81s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (303.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-988083
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-988083
E0531 19:18:01.964120   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-988083: exit status 82 (2m1.957632639s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-988083-m03"  ...
	* Stopping node "multinode-988083-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-988083" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-988083 --wait=true -v=8 --alsologtostderr
E0531 19:19:58.917153   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-988083 --wait=true -v=8 --alsologtostderr: (2m58.964105472s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-988083
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-988083 -n multinode-988083
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-988083 logs -n 25: (1.4382758s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083:/home/docker/cp-test_multinode-988083-m02_multinode-988083.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083 sudo cat                                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m02_multinode-988083.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03:/home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083-m03 sudo cat                                   | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp testdata/cp-test.txt                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083:/home/docker/cp-test_multinode-988083-m03_multinode-988083.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083 sudo cat                                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m03_multinode-988083.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02:/home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083-m02 sudo cat                                   | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-988083 node stop m03                                                          | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	| node    | multinode-988083 node start                                                             | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-988083                                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC |                     |
	| stop    | -p multinode-988083                                                                     | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC |                     |
	| start   | -p multinode-988083                                                                     | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:19 UTC | 31 May 24 19:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-988083                                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 19:19:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:19:51.862236   51569 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:19:51.862479   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:19:51.862488   51569 out.go:304] Setting ErrFile to fd 2...
	I0531 19:19:51.862492   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:19:51.862711   51569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:19:51.863263   51569 out.go:298] Setting JSON to false
	I0531 19:19:51.864194   51569 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7338,"bootTime":1717175854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:19:51.864251   51569 start.go:139] virtualization: kvm guest
	I0531 19:19:51.866502   51569 out.go:177] * [multinode-988083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:19:51.867987   51569 notify.go:220] Checking for updates...
	I0531 19:19:51.868011   51569 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:19:51.869477   51569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:19:51.870715   51569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:19:51.872026   51569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:19:51.873133   51569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:19:51.874256   51569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:19:51.875792   51569 config.go:182] Loaded profile config "multinode-988083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:19:51.875881   51569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:19:51.876295   51569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:19:51.876339   51569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:19:51.890868   51569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0531 19:19:51.891183   51569 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:19:51.891671   51569 main.go:141] libmachine: Using API Version  1
	I0531 19:19:51.891707   51569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:19:51.892085   51569 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:19:51.892269   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.927015   51569 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 19:19:51.928133   51569 start.go:297] selected driver: kvm2
	I0531 19:19:51.928149   51569 start.go:901] validating driver "kvm2" against &{Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:19:51.928292   51569 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:19:51.928592   51569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:19:51.928656   51569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:19:51.943456   51569 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:19:51.944086   51569 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:19:51.944112   51569 cni.go:84] Creating CNI manager for ""
	I0531 19:19:51.944119   51569 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0531 19:19:51.944162   51569 start.go:340] cluster config:
	{Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:19:51.944282   51569 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:19:51.946548   51569 out.go:177] * Starting "multinode-988083" primary control-plane node in "multinode-988083" cluster
	I0531 19:19:51.947559   51569 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:19:51.947583   51569 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 19:19:51.947592   51569 cache.go:56] Caching tarball of preloaded images
	I0531 19:19:51.947662   51569 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:19:51.947672   51569 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 19:19:51.947773   51569 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/config.json ...
	I0531 19:19:51.947942   51569 start.go:360] acquireMachinesLock for multinode-988083: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:19:51.947975   51569 start.go:364] duration metric: took 18.346µs to acquireMachinesLock for "multinode-988083"
	I0531 19:19:51.947988   51569 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:19:51.947995   51569 fix.go:54] fixHost starting: 
	I0531 19:19:51.948234   51569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:19:51.948259   51569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:19:51.961971   51569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0531 19:19:51.962540   51569 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:19:51.963163   51569 main.go:141] libmachine: Using API Version  1
	I0531 19:19:51.963196   51569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:19:51.963535   51569 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:19:51.963711   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.963859   51569 main.go:141] libmachine: (multinode-988083) Calling .GetState
	I0531 19:19:51.965438   51569 fix.go:112] recreateIfNeeded on multinode-988083: state=Running err=<nil>
	W0531 19:19:51.965467   51569 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 19:19:51.967215   51569 out.go:177] * Updating the running kvm2 "multinode-988083" VM ...
	I0531 19:19:51.968306   51569 machine.go:94] provisionDockerMachine start ...
	I0531 19:19:51.968332   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.968498   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:51.971365   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:51.971876   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:51.971906   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:51.972091   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:51.972251   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:51.972428   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:51.972556   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:51.972708   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:51.972918   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:51.972930   51569 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 19:19:52.074296   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-988083
	
	I0531 19:19:52.074337   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.074579   51569 buildroot.go:166] provisioning hostname "multinode-988083"
	I0531 19:19:52.074607   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.074813   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.077512   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.077869   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.077899   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.078106   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.078283   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.078424   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.078557   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.078721   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.078911   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.078927   51569 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-988083 && echo "multinode-988083" | sudo tee /etc/hostname
	I0531 19:19:52.210803   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-988083
	
	I0531 19:19:52.210833   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.213438   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.213778   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.213808   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.214015   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.214197   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.214322   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.214434   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.214545   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.214749   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.214782   51569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-988083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-988083/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-988083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:19:52.317810   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:19:52.317837   51569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:19:52.317861   51569 buildroot.go:174] setting up certificates
	I0531 19:19:52.317872   51569 provision.go:84] configureAuth start
	I0531 19:19:52.317880   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.318122   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:19:52.320330   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.320665   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.320694   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.320823   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.322754   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.323060   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.323084   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.323225   51569 provision.go:143] copyHostCerts
	I0531 19:19:52.323250   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:19:52.323288   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:19:52.323300   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:19:52.323366   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:19:52.323434   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:19:52.323451   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:19:52.323458   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:19:52.323482   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:19:52.323520   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:19:52.323540   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:19:52.323546   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:19:52.323568   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:19:52.323610   51569 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.multinode-988083 san=[127.0.0.1 192.168.39.78 localhost minikube multinode-988083]
	I0531 19:19:52.519258   51569 provision.go:177] copyRemoteCerts
	I0531 19:19:52.519319   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:19:52.519338   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.521780   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.522137   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.522153   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.522344   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.522524   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.522693   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.522816   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:19:52.600304   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:19:52.600377   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:19:52.625245   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:19:52.625319   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:19:52.650633   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:19:52.650708   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 19:19:52.677213   51569 provision.go:87] duration metric: took 359.330813ms to configureAuth
	I0531 19:19:52.677239   51569 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:19:52.677480   51569 config.go:182] Loaded profile config "multinode-988083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:19:52.677570   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.679874   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.680284   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.680308   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.680432   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.680630   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.680811   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.680989   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.681152   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.681338   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.681354   51569 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:21:23.353031   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:21:23.353056   51569 machine.go:97] duration metric: took 1m31.384732212s to provisionDockerMachine
	I0531 19:21:23.353078   51569 start.go:293] postStartSetup for "multinode-988083" (driver="kvm2")
	I0531 19:21:23.353088   51569 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:21:23.353103   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.353456   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:21:23.353492   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.356439   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.356825   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.356845   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.357037   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.357222   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.357379   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.357519   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.436575   51569 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:21:23.441027   51569 command_runner.go:130] > NAME=Buildroot
	I0531 19:21:23.441042   51569 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0531 19:21:23.441046   51569 command_runner.go:130] > ID=buildroot
	I0531 19:21:23.441050   51569 command_runner.go:130] > VERSION_ID=2023.02.9
	I0531 19:21:23.441056   51569 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0531 19:21:23.441144   51569 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:21:23.441162   51569 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:21:23.441230   51569 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:21:23.441322   51569 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:21:23.441334   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 19:21:23.441458   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:21:23.451091   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:21:23.476593   51569 start.go:296] duration metric: took 123.500193ms for postStartSetup
	I0531 19:21:23.476635   51569 fix.go:56] duration metric: took 1m31.52863973s for fixHost
	I0531 19:21:23.476654   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.479394   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.479698   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.479727   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.479858   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.480120   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.480318   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.480471   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.480661   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:21:23.480850   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:21:23.480863   51569 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 19:21:23.578468   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717183283.555209718
	
	I0531 19:21:23.578492   51569 fix.go:216] guest clock: 1717183283.555209718
	I0531 19:21:23.578501   51569 fix.go:229] Guest: 2024-05-31 19:21:23.555209718 +0000 UTC Remote: 2024-05-31 19:21:23.476638885 +0000 UTC m=+91.647733565 (delta=78.570833ms)
	I0531 19:21:23.578527   51569 fix.go:200] guest clock delta is within tolerance: 78.570833ms
	I0531 19:21:23.578534   51569 start.go:83] releasing machines lock for "multinode-988083", held for 1m31.630549811s
	I0531 19:21:23.578559   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.578799   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:21:23.581382   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.581688   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.581715   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.581790   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582264   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582407   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582490   51569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:21:23.582537   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.582672   51569 ssh_runner.go:195] Run: cat /version.json
	I0531 19:21:23.582696   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.585342   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585372   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585746   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.585774   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585802   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.585822   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585974   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.586051   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.586128   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.586194   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.586262   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.586403   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.586467   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.586636   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.658232   51569 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0531 19:21:23.658372   51569 ssh_runner.go:195] Run: systemctl --version
	I0531 19:21:23.683943   51569 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:21:23.684730   51569 command_runner.go:130] > systemd 252 (252)
	I0531 19:21:23.684772   51569 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0531 19:21:23.684837   51569 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:21:23.843965   51569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:21:23.851905   51569 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0531 19:21:23.852343   51569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:21:23.852405   51569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:21:23.861639   51569 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:21:23.861665   51569 start.go:494] detecting cgroup driver to use...
	I0531 19:21:23.861722   51569 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:21:23.877514   51569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:21:23.892194   51569 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:21:23.892251   51569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:21:23.905850   51569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:21:23.919428   51569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:21:24.063081   51569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:21:24.221907   51569 docker.go:233] disabling docker service ...
	I0531 19:21:24.221988   51569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:21:24.242256   51569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:21:24.255597   51569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:21:24.392350   51569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:21:24.523889   51569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:21:24.538698   51569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:21:24.557589   51569 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0531 19:21:24.557928   51569 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:21:24.557976   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.568875   51569 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:21:24.568935   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.579964   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.590420   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.600628   51569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:21:24.611084   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.621115   51569 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.632138   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.642163   51569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:21:24.651399   51569 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:21:24.651471   51569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:21:24.660416   51569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:21:24.792332   51569 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:21:25.019616   51569 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:21:25.019674   51569 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:21:25.024711   51569 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:21:25.024734   51569 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:21:25.024741   51569 command_runner.go:130] > Device: 0,22	Inode: 1347        Links: 1
	I0531 19:21:25.024747   51569 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:21:25.024754   51569 command_runner.go:130] > Access: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024759   51569 command_runner.go:130] > Modify: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024763   51569 command_runner.go:130] > Change: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024767   51569 command_runner.go:130] >  Birth: -
	I0531 19:21:25.024782   51569 start.go:562] Will wait 60s for crictl version
	I0531 19:21:25.024812   51569 ssh_runner.go:195] Run: which crictl
	I0531 19:21:25.028520   51569 command_runner.go:130] > /usr/bin/crictl
	I0531 19:21:25.028638   51569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:21:25.063450   51569 command_runner.go:130] > Version:  0.1.0
	I0531 19:21:25.063472   51569 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:21:25.063480   51569 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0531 19:21:25.063488   51569 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:21:25.064566   51569 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:21:25.064641   51569 ssh_runner.go:195] Run: crio --version
	I0531 19:21:25.091174   51569 command_runner.go:130] > crio version 1.29.1
	I0531 19:21:25.091192   51569 command_runner.go:130] > Version:        1.29.1
	I0531 19:21:25.091198   51569 command_runner.go:130] > GitCommit:      unknown
	I0531 19:21:25.091203   51569 command_runner.go:130] > GitCommitDate:  unknown
	I0531 19:21:25.091207   51569 command_runner.go:130] > GitTreeState:   clean
	I0531 19:21:25.091227   51569 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0531 19:21:25.091233   51569 command_runner.go:130] > GoVersion:      go1.21.6
	I0531 19:21:25.091239   51569 command_runner.go:130] > Compiler:       gc
	I0531 19:21:25.091247   51569 command_runner.go:130] > Platform:       linux/amd64
	I0531 19:21:25.091254   51569 command_runner.go:130] > Linkmode:       dynamic
	I0531 19:21:25.091264   51569 command_runner.go:130] > BuildTags:      
	I0531 19:21:25.091271   51569 command_runner.go:130] >   containers_image_ostree_stub
	I0531 19:21:25.091276   51569 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0531 19:21:25.091283   51569 command_runner.go:130] >   btrfs_noversion
	I0531 19:21:25.091287   51569 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0531 19:21:25.091291   51569 command_runner.go:130] >   libdm_no_deferred_remove
	I0531 19:21:25.091297   51569 command_runner.go:130] >   seccomp
	I0531 19:21:25.091301   51569 command_runner.go:130] > LDFlags:          unknown
	I0531 19:21:25.091308   51569 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:21:25.091312   51569 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:21:25.092428   51569 ssh_runner.go:195] Run: crio --version
	I0531 19:21:25.118285   51569 command_runner.go:130] > crio version 1.29.1
	I0531 19:21:25.118307   51569 command_runner.go:130] > Version:        1.29.1
	I0531 19:21:25.118314   51569 command_runner.go:130] > GitCommit:      unknown
	I0531 19:21:25.118321   51569 command_runner.go:130] > GitCommitDate:  unknown
	I0531 19:21:25.118327   51569 command_runner.go:130] > GitTreeState:   clean
	I0531 19:21:25.118336   51569 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0531 19:21:25.118342   51569 command_runner.go:130] > GoVersion:      go1.21.6
	I0531 19:21:25.118348   51569 command_runner.go:130] > Compiler:       gc
	I0531 19:21:25.118355   51569 command_runner.go:130] > Platform:       linux/amd64
	I0531 19:21:25.118371   51569 command_runner.go:130] > Linkmode:       dynamic
	I0531 19:21:25.118381   51569 command_runner.go:130] > BuildTags:      
	I0531 19:21:25.118387   51569 command_runner.go:130] >   containers_image_ostree_stub
	I0531 19:21:25.118393   51569 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0531 19:21:25.118403   51569 command_runner.go:130] >   btrfs_noversion
	I0531 19:21:25.118411   51569 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0531 19:21:25.118421   51569 command_runner.go:130] >   libdm_no_deferred_remove
	I0531 19:21:25.118426   51569 command_runner.go:130] >   seccomp
	I0531 19:21:25.118436   51569 command_runner.go:130] > LDFlags:          unknown
	I0531 19:21:25.118442   51569 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:21:25.118451   51569 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:21:25.122163   51569 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 19:21:25.123675   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:21:25.126244   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:25.126536   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:25.126575   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:25.126743   51569 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 19:21:25.130908   51569 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0531 19:21:25.131082   51569 kubeadm.go:877] updating cluster {Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:21:25.131245   51569 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:21:25.131313   51569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:21:25.177009   51569 command_runner.go:130] > {
	I0531 19:21:25.177032   51569 command_runner.go:130] >   "images": [
	I0531 19:21:25.177038   51569 command_runner.go:130] >     {
	I0531 19:21:25.177048   51569 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0531 19:21:25.177054   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177062   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0531 19:21:25.177067   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177072   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177086   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0531 19:21:25.177099   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0531 19:21:25.177106   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177114   51569 command_runner.go:130] >       "size": "65291810",
	I0531 19:21:25.177121   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177128   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177137   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177148   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177154   51569 command_runner.go:130] >     },
	I0531 19:21:25.177160   51569 command_runner.go:130] >     {
	I0531 19:21:25.177171   51569 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0531 19:21:25.177180   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177189   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0531 19:21:25.177195   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177212   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177228   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0531 19:21:25.177244   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0531 19:21:25.177254   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177262   51569 command_runner.go:130] >       "size": "65908273",
	I0531 19:21:25.177270   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177282   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177293   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177302   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177308   51569 command_runner.go:130] >     },
	I0531 19:21:25.177315   51569 command_runner.go:130] >     {
	I0531 19:21:25.177327   51569 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0531 19:21:25.177336   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177346   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0531 19:21:25.177356   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177365   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177379   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0531 19:21:25.177394   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0531 19:21:25.177403   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177410   51569 command_runner.go:130] >       "size": "1363676",
	I0531 19:21:25.177419   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177427   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177436   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177445   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177454   51569 command_runner.go:130] >     },
	I0531 19:21:25.177460   51569 command_runner.go:130] >     {
	I0531 19:21:25.177474   51569 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:21:25.177483   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177493   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:21:25.177502   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177511   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177525   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:21:25.177549   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:21:25.177558   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177565   51569 command_runner.go:130] >       "size": "31470524",
	I0531 19:21:25.177596   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177618   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177628   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177637   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177643   51569 command_runner.go:130] >     },
	I0531 19:21:25.177652   51569 command_runner.go:130] >     {
	I0531 19:21:25.177663   51569 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0531 19:21:25.177673   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177682   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0531 19:21:25.177691   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177699   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177715   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0531 19:21:25.177730   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0531 19:21:25.177740   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177749   51569 command_runner.go:130] >       "size": "61245718",
	I0531 19:21:25.177759   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177766   51569 command_runner.go:130] >       "username": "nonroot",
	I0531 19:21:25.177776   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177784   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177792   51569 command_runner.go:130] >     },
	I0531 19:21:25.177799   51569 command_runner.go:130] >     {
	I0531 19:21:25.177813   51569 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0531 19:21:25.177823   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177834   51569 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0531 19:21:25.177843   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177850   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177865   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0531 19:21:25.177879   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0531 19:21:25.177892   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177905   51569 command_runner.go:130] >       "size": "150779692",
	I0531 19:21:25.177918   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.177925   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.177938   51569 command_runner.go:130] >       },
	I0531 19:21:25.177943   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177949   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177955   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177961   51569 command_runner.go:130] >     },
	I0531 19:21:25.177973   51569 command_runner.go:130] >     {
	I0531 19:21:25.177983   51569 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0531 19:21:25.177991   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178000   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0531 19:21:25.178008   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178012   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178020   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0531 19:21:25.178030   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0531 19:21:25.178035   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178040   51569 command_runner.go:130] >       "size": "117601759",
	I0531 19:21:25.178046   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178050   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178055   51569 command_runner.go:130] >       },
	I0531 19:21:25.178060   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178070   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178076   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178084   51569 command_runner.go:130] >     },
	I0531 19:21:25.178093   51569 command_runner.go:130] >     {
	I0531 19:21:25.178106   51569 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0531 19:21:25.178114   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178125   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0531 19:21:25.178131   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178142   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178173   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0531 19:21:25.178190   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0531 19:21:25.178197   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178204   51569 command_runner.go:130] >       "size": "112170310",
	I0531 19:21:25.178214   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178223   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178232   51569 command_runner.go:130] >       },
	I0531 19:21:25.178238   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178248   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178255   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178260   51569 command_runner.go:130] >     },
	I0531 19:21:25.178266   51569 command_runner.go:130] >     {
	I0531 19:21:25.178275   51569 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0531 19:21:25.178287   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178295   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0531 19:21:25.178305   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178312   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178327   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0531 19:21:25.178340   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0531 19:21:25.178346   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178353   51569 command_runner.go:130] >       "size": "85933465",
	I0531 19:21:25.178363   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.178373   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178379   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178389   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178398   51569 command_runner.go:130] >     },
	I0531 19:21:25.178406   51569 command_runner.go:130] >     {
	I0531 19:21:25.178419   51569 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0531 19:21:25.178427   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178438   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0531 19:21:25.178447   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178465   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178480   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0531 19:21:25.178496   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0531 19:21:25.178504   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178514   51569 command_runner.go:130] >       "size": "63026504",
	I0531 19:21:25.178523   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178532   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178541   51569 command_runner.go:130] >       },
	I0531 19:21:25.178548   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178559   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178569   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178578   51569 command_runner.go:130] >     },
	I0531 19:21:25.178588   51569 command_runner.go:130] >     {
	I0531 19:21:25.178600   51569 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:21:25.178610   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178620   51569 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:21:25.178628   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178635   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178651   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:21:25.178666   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:21:25.178675   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178690   51569 command_runner.go:130] >       "size": "750414",
	I0531 19:21:25.178699   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178709   51569 command_runner.go:130] >         "value": "65535"
	I0531 19:21:25.178718   51569 command_runner.go:130] >       },
	I0531 19:21:25.178726   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178733   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178737   51569 command_runner.go:130] >       "pinned": true
	I0531 19:21:25.178745   51569 command_runner.go:130] >     }
	I0531 19:21:25.178755   51569 command_runner.go:130] >   ]
	I0531 19:21:25.178760   51569 command_runner.go:130] > }
	I0531 19:21:25.179040   51569 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:21:25.179052   51569 crio.go:433] Images already preloaded, skipping extraction
	I0531 19:21:25.179102   51569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:21:25.212965   51569 command_runner.go:130] > {
	I0531 19:21:25.212995   51569 command_runner.go:130] >   "images": [
	I0531 19:21:25.213001   51569 command_runner.go:130] >     {
	I0531 19:21:25.213013   51569 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0531 19:21:25.213019   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213025   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0531 19:21:25.213029   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213033   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213040   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0531 19:21:25.213048   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0531 19:21:25.213053   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213060   51569 command_runner.go:130] >       "size": "65291810",
	I0531 19:21:25.213066   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213075   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213089   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213100   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213105   51569 command_runner.go:130] >     },
	I0531 19:21:25.213111   51569 command_runner.go:130] >     {
	I0531 19:21:25.213120   51569 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0531 19:21:25.213129   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213137   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0531 19:21:25.213141   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213149   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213164   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0531 19:21:25.213177   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0531 19:21:25.213186   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213203   51569 command_runner.go:130] >       "size": "65908273",
	I0531 19:21:25.213213   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213225   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213233   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213237   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213245   51569 command_runner.go:130] >     },
	I0531 19:21:25.213253   51569 command_runner.go:130] >     {
	I0531 19:21:25.213267   51569 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0531 19:21:25.213277   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213288   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0531 19:21:25.213298   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213307   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213322   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0531 19:21:25.213335   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0531 19:21:25.213342   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213348   51569 command_runner.go:130] >       "size": "1363676",
	I0531 19:21:25.213358   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213368   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213375   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213385   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213394   51569 command_runner.go:130] >     },
	I0531 19:21:25.213402   51569 command_runner.go:130] >     {
	I0531 19:21:25.213414   51569 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:21:25.213424   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213436   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:21:25.213442   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213446   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213462   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:21:25.213487   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:21:25.213498   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213504   51569 command_runner.go:130] >       "size": "31470524",
	I0531 19:21:25.213511   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213520   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213529   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213537   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213542   51569 command_runner.go:130] >     },
	I0531 19:21:25.213554   51569 command_runner.go:130] >     {
	I0531 19:21:25.213568   51569 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0531 19:21:25.213577   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213586   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0531 19:21:25.213595   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213604   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213619   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0531 19:21:25.213634   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0531 19:21:25.213643   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213650   51569 command_runner.go:130] >       "size": "61245718",
	I0531 19:21:25.213654   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213663   51569 command_runner.go:130] >       "username": "nonroot",
	I0531 19:21:25.213673   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213683   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213692   51569 command_runner.go:130] >     },
	I0531 19:21:25.213701   51569 command_runner.go:130] >     {
	I0531 19:21:25.213714   51569 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0531 19:21:25.213723   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213734   51569 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0531 19:21:25.213749   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213759   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213774   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0531 19:21:25.213788   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0531 19:21:25.213797   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213808   51569 command_runner.go:130] >       "size": "150779692",
	I0531 19:21:25.213817   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.213826   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.213834   51569 command_runner.go:130] >       },
	I0531 19:21:25.213843   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213852   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213857   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213869   51569 command_runner.go:130] >     },
	I0531 19:21:25.213873   51569 command_runner.go:130] >     {
	I0531 19:21:25.213883   51569 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0531 19:21:25.213889   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213897   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0531 19:21:25.213909   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213916   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213934   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0531 19:21:25.213949   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0531 19:21:25.213958   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213965   51569 command_runner.go:130] >       "size": "117601759",
	I0531 19:21:25.213974   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.213981   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.213989   51569 command_runner.go:130] >       },
	I0531 19:21:25.213996   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214006   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214012   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214018   51569 command_runner.go:130] >     },
	I0531 19:21:25.214024   51569 command_runner.go:130] >     {
	I0531 19:21:25.214037   51569 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0531 19:21:25.214046   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214055   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0531 19:21:25.214062   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214066   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214088   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0531 19:21:25.214098   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0531 19:21:25.214102   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214106   51569 command_runner.go:130] >       "size": "112170310",
	I0531 19:21:25.214112   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214116   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.214121   51569 command_runner.go:130] >       },
	I0531 19:21:25.214128   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214134   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214138   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214146   51569 command_runner.go:130] >     },
	I0531 19:21:25.214149   51569 command_runner.go:130] >     {
	I0531 19:21:25.214155   51569 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0531 19:21:25.214159   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214164   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0531 19:21:25.214168   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214172   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214186   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0531 19:21:25.214196   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0531 19:21:25.214200   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214204   51569 command_runner.go:130] >       "size": "85933465",
	I0531 19:21:25.214208   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.214212   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214218   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214222   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214228   51569 command_runner.go:130] >     },
	I0531 19:21:25.214231   51569 command_runner.go:130] >     {
	I0531 19:21:25.214237   51569 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0531 19:21:25.214244   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214249   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0531 19:21:25.214254   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214258   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214265   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0531 19:21:25.214273   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0531 19:21:25.214277   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214282   51569 command_runner.go:130] >       "size": "63026504",
	I0531 19:21:25.214288   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214291   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.214294   51569 command_runner.go:130] >       },
	I0531 19:21:25.214298   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214302   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214307   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214311   51569 command_runner.go:130] >     },
	I0531 19:21:25.214316   51569 command_runner.go:130] >     {
	I0531 19:21:25.214322   51569 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:21:25.214327   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214332   51569 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:21:25.214335   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214339   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214346   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:21:25.214355   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:21:25.214358   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214364   51569 command_runner.go:130] >       "size": "750414",
	I0531 19:21:25.214376   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214383   51569 command_runner.go:130] >         "value": "65535"
	I0531 19:21:25.214386   51569 command_runner.go:130] >       },
	I0531 19:21:25.214390   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214394   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214397   51569 command_runner.go:130] >       "pinned": true
	I0531 19:21:25.214401   51569 command_runner.go:130] >     }
	I0531 19:21:25.214404   51569 command_runner.go:130] >   ]
	I0531 19:21:25.214407   51569 command_runner.go:130] > }
	I0531 19:21:25.214536   51569 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:21:25.214546   51569 cache_images.go:84] Images are preloaded, skipping loading
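The image list above is the JSON that minikube reads back from the CRI runtime before concluding that the preloaded images are already present. As a rough, hypothetical illustration only (this is not minikube's actual code; the struct fields below are assumed from what is visible in the log: id, repoTags, repoDigests, size, pinned), the same listing could be fetched and decoded via crictl's JSON output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// Fields mirror the entries shown in the log above.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Pinned      bool     `json:"pinned"`
	}

	type criImageList struct {
		Images []criImage `json:"images"`
	}

	func main() {
		// Requires crictl on the node; "-o json" prints the image list as JSON.
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list criImageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Printf("%v pinned=%v size=%s\n", img.RepoTags, img.Pinned, img.Size)
		}
	}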
	I0531 19:21:25.214552   51569 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.30.1 crio true true} ...
	I0531 19:21:25.214659   51569 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-988083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
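The kubelet unit shown above is the systemd drop-in minikube writes for the node, with the Kubernetes version, node name, and node IP substituted in. A minimal, hypothetical sketch of rendering such a drop-in with Go's text/template follows; the template text and field names are illustrative and are not minikube's actual template, and the values are simply copied from the log output above.

	package main

	import (
		"os"
		"text/template"
	)

	// Illustrative drop-in modelled on the unit logged above; not minikube's template.
	const unit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(unit))
		// Values copied from the log output above.
		if err := t.Execute(os.Stdout, map[string]string{
			"KubernetesVersion": "v1.30.1",
			"NodeName":          "multinode-988083",
			"NodeIP":            "192.168.39.78",
		}); err != nil {
			panic(err)
		}
	}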
	I0531 19:21:25.214731   51569 ssh_runner.go:195] Run: crio config
	I0531 19:21:25.256986   51569 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:21:25.257018   51569 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:21:25.257028   51569 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:21:25.257033   51569 command_runner.go:130] > #
	I0531 19:21:25.257042   51569 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:21:25.257051   51569 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:21:25.257061   51569 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:21:25.257071   51569 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:21:25.257077   51569 command_runner.go:130] > # reload'.
	I0531 19:21:25.257095   51569 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:21:25.257111   51569 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:21:25.257121   51569 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:21:25.257127   51569 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:21:25.257133   51569 command_runner.go:130] > [crio]
	I0531 19:21:25.257147   51569 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:21:25.257152   51569 command_runner.go:130] > # container images, in this directory.
	I0531 19:21:25.257162   51569 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0531 19:21:25.257184   51569 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:21:25.257337   51569 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0531 19:21:25.257351   51569 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0531 19:21:25.257535   51569 command_runner.go:130] > # imagestore = ""
	I0531 19:21:25.257545   51569 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:21:25.257551   51569 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:21:25.257778   51569 command_runner.go:130] > storage_driver = "overlay"
	I0531 19:21:25.257797   51569 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:21:25.257807   51569 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:21:25.257818   51569 command_runner.go:130] > storage_option = [
	I0531 19:21:25.257917   51569 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0531 19:21:25.257995   51569 command_runner.go:130] > ]
	I0531 19:21:25.258005   51569 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:21:25.258011   51569 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:21:25.258571   51569 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:21:25.258587   51569 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:21:25.258597   51569 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:21:25.258609   51569 command_runner.go:130] > # always happen on a node reboot
	I0531 19:21:25.258808   51569 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:21:25.258842   51569 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:21:25.258857   51569 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:21:25.258869   51569 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:21:25.259062   51569 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0531 19:21:25.259103   51569 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:21:25.259139   51569 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:21:25.259199   51569 command_runner.go:130] > # internal_wipe = true
	I0531 19:21:25.259216   51569 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0531 19:21:25.259226   51569 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0531 19:21:25.259658   51569 command_runner.go:130] > # internal_repair = false
	I0531 19:21:25.259670   51569 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:21:25.259679   51569 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:21:25.259688   51569 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:21:25.260084   51569 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:21:25.260098   51569 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:21:25.260104   51569 command_runner.go:130] > [crio.api]
	I0531 19:21:25.260114   51569 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:21:25.260491   51569 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:21:25.260503   51569 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:21:25.260791   51569 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:21:25.260806   51569 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:21:25.260812   51569 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:21:25.261121   51569 command_runner.go:130] > # stream_port = "0"
	I0531 19:21:25.261136   51569 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:21:25.261355   51569 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:21:25.261368   51569 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:21:25.261607   51569 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:21:25.261638   51569 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:21:25.261650   51569 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:21:25.261659   51569 command_runner.go:130] > # minutes.
	I0531 19:21:25.261827   51569 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:21:25.261843   51569 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:21:25.261852   51569 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:21:25.262096   51569 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:21:25.262108   51569 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:21:25.262114   51569 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:21:25.262135   51569 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:21:25.262415   51569 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:21:25.262432   51569 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0531 19:21:25.262568   51569 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0531 19:21:25.262581   51569 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0531 19:21:25.262707   51569 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0531 19:21:25.262720   51569 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:21:25.262730   51569 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:21:25.262737   51569 command_runner.go:130] > [crio.runtime]
	I0531 19:21:25.262748   51569 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:21:25.262762   51569 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:21:25.262772   51569 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:21:25.262785   51569 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:21:25.262852   51569 command_runner.go:130] > # default_ulimits = [
	I0531 19:21:25.263007   51569 command_runner.go:130] > # ]
	I0531 19:21:25.263021   51569 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:21:25.263455   51569 command_runner.go:130] > # no_pivot = false
	I0531 19:21:25.263467   51569 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:21:25.263477   51569 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:21:25.263760   51569 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:21:25.263772   51569 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:21:25.263780   51569 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:21:25.263792   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:21:25.263946   51569 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0531 19:21:25.263956   51569 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:21:25.263967   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:21:25.264124   51569 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:21:25.264138   51569 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:21:25.264143   51569 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:21:25.264151   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:21:25.264157   51569 command_runner.go:130] > conmon_env = [
	I0531 19:21:25.264329   51569 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0531 19:21:25.264386   51569 command_runner.go:130] > ]
	I0531 19:21:25.264399   51569 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:21:25.264411   51569 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:21:25.264423   51569 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:21:25.264530   51569 command_runner.go:130] > # default_env = [
	I0531 19:21:25.264706   51569 command_runner.go:130] > # ]
	I0531 19:21:25.264725   51569 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:21:25.264738   51569 command_runner.go:130] > # This option is deprecated, and will be interpreted based on whether SELinux is enabled on the host in the future.
	I0531 19:21:25.266350   51569 command_runner.go:130] > # selinux = false
	I0531 19:21:25.266365   51569 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:21:25.266374   51569 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:21:25.266383   51569 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:21:25.266390   51569 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:21:25.266403   51569 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:21:25.266413   51569 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:21:25.266427   51569 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:21:25.266434   51569 command_runner.go:130] > # which might increase security.
	I0531 19:21:25.266442   51569 command_runner.go:130] > # This option is currently deprecated,
	I0531 19:21:25.266456   51569 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0531 19:21:25.266467   51569 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0531 19:21:25.266478   51569 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:21:25.266492   51569 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:21:25.266504   51569 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:21:25.266526   51569 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:21:25.266538   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.266546   51569 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:21:25.266557   51569 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:21:25.266568   51569 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:21:25.266579   51569 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:21:25.266591   51569 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0531 19:21:25.266601   51569 command_runner.go:130] > # blockio parameters.
	I0531 19:21:25.266612   51569 command_runner.go:130] > # blockio_reload = false
	I0531 19:21:25.266625   51569 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:21:25.266635   51569 command_runner.go:130] > # irqbalance daemon.
	I0531 19:21:25.266646   51569 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:21:25.266659   51569 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0531 19:21:25.266674   51569 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0531 19:21:25.266688   51569 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0531 19:21:25.266702   51569 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0531 19:21:25.266716   51569 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:21:25.266726   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.266734   51569 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:21:25.266746   51569 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:21:25.266756   51569 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:21:25.266793   51569 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:21:25.266803   51569 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:21:25.266814   51569 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:21:25.266829   51569 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:21:25.266839   51569 command_runner.go:130] > # will be added.
	I0531 19:21:25.266849   51569 command_runner.go:130] > # default_capabilities = [
	I0531 19:21:25.266858   51569 command_runner.go:130] > # 	"CHOWN",
	I0531 19:21:25.266867   51569 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:21:25.266873   51569 command_runner.go:130] > # 	"FSETID",
	I0531 19:21:25.266884   51569 command_runner.go:130] > # 	"FOWNER",
	I0531 19:21:25.266893   51569 command_runner.go:130] > # 	"SETGID",
	I0531 19:21:25.266900   51569 command_runner.go:130] > # 	"SETUID",
	I0531 19:21:25.266909   51569 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:21:25.266916   51569 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:21:25.266925   51569 command_runner.go:130] > # 	"KILL",
	I0531 19:21:25.266938   51569 command_runner.go:130] > # ]
	I0531 19:21:25.266954   51569 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:21:25.266969   51569 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:21:25.266980   51569 command_runner.go:130] > # add_inheritable_capabilities = false
	I0531 19:21:25.266994   51569 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:21:25.267007   51569 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:21:25.267014   51569 command_runner.go:130] > default_sysctls = [
	I0531 19:21:25.267025   51569 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0531 19:21:25.267030   51569 command_runner.go:130] > ]
	I0531 19:21:25.267039   51569 command_runner.go:130] > # List of devices on the host that a
	I0531 19:21:25.267052   51569 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:21:25.267062   51569 command_runner.go:130] > # allowed_devices = [
	I0531 19:21:25.267070   51569 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:21:25.267077   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267088   51569 command_runner.go:130] > # List of additional devices, specified as
	I0531 19:21:25.267100   51569 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:21:25.267112   51569 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:21:25.267126   51569 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:21:25.267136   51569 command_runner.go:130] > # additional_devices = [
	I0531 19:21:25.267146   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267155   51569 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:21:25.267165   51569 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:21:25.267172   51569 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:21:25.267179   51569 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:21:25.267194   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267208   51569 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:21:25.267221   51569 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:21:25.267232   51569 command_runner.go:130] > # Defaults to false.
	I0531 19:21:25.267251   51569 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:21:25.267265   51569 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:21:25.267275   51569 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:21:25.267285   51569 command_runner.go:130] > # hooks_dir = [
	I0531 19:21:25.267296   51569 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:21:25.267303   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267321   51569 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0531 19:21:25.267335   51569 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:21:25.267356   51569 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:21:25.267372   51569 command_runner.go:130] > #
	I0531 19:21:25.267386   51569 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:21:25.267400   51569 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:21:25.267413   51569 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:21:25.267421   51569 command_runner.go:130] > #
	I0531 19:21:25.267432   51569 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:21:25.267446   51569 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:21:25.267460   51569 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:21:25.267471   51569 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:21:25.267480   51569 command_runner.go:130] > #
	I0531 19:21:25.267487   51569 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:21:25.267499   51569 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:21:25.267513   51569 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:21:25.267523   51569 command_runner.go:130] > pids_limit = 1024
	I0531 19:21:25.267533   51569 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0531 19:21:25.267542   51569 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:21:25.267552   51569 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:21:25.267564   51569 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:21:25.267573   51569 command_runner.go:130] > # log_size_max = -1
	I0531 19:21:25.267584   51569 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0531 19:21:25.267595   51569 command_runner.go:130] > # log_to_journald = false
	I0531 19:21:25.267610   51569 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:21:25.267619   51569 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:21:25.267631   51569 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:21:25.267642   51569 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:21:25.267651   51569 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:21:25.267662   51569 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:21:25.267675   51569 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:21:25.267685   51569 command_runner.go:130] > # read_only = false
	I0531 19:21:25.267698   51569 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:21:25.267712   51569 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:21:25.267722   51569 command_runner.go:130] > # live configuration reload.
	I0531 19:21:25.267732   51569 command_runner.go:130] > # log_level = "info"
	I0531 19:21:25.267742   51569 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:21:25.267753   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.267768   51569 command_runner.go:130] > # log_filter = ""
	I0531 19:21:25.267782   51569 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:21:25.267796   51569 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:21:25.267805   51569 command_runner.go:130] > # separated by comma.
	I0531 19:21:25.267818   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267827   51569 command_runner.go:130] > # uid_mappings = ""
	I0531 19:21:25.267840   51569 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:21:25.267854   51569 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:21:25.267864   51569 command_runner.go:130] > # separated by comma.
	I0531 19:21:25.267878   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267887   51569 command_runner.go:130] > # gid_mappings = ""
	I0531 19:21:25.267898   51569 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:21:25.267912   51569 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:21:25.267925   51569 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:21:25.267941   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267951   51569 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:21:25.267964   51569 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:21:25.267977   51569 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:21:25.267987   51569 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:21:25.268003   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.268014   51569 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:21:25.268025   51569 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:21:25.268038   51569 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:21:25.268050   51569 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0531 19:21:25.268060   51569 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:21:25.268069   51569 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:21:25.268082   51569 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:21:25.268094   51569 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:21:25.268106   51569 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:21:25.268116   51569 command_runner.go:130] > drop_infra_ctr = false
	I0531 19:21:25.268129   51569 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:21:25.268141   51569 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:21:25.268157   51569 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:21:25.268168   51569 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:21:25.268183   51569 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0531 19:21:25.268201   51569 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0531 19:21:25.268220   51569 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0531 19:21:25.268232   51569 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0531 19:21:25.268241   51569 command_runner.go:130] > # shared_cpuset = ""
	I0531 19:21:25.268252   51569 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:21:25.268264   51569 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:21:25.268274   51569 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:21:25.268287   51569 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:21:25.268297   51569 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0531 19:21:25.268308   51569 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0531 19:21:25.268322   51569 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0531 19:21:25.268331   51569 command_runner.go:130] > # enable_criu_support = false
	I0531 19:21:25.268340   51569 command_runner.go:130] > # Enable/disable the generation of the container,
	I0531 19:21:25.268354   51569 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0531 19:21:25.268364   51569 command_runner.go:130] > # enable_pod_events = false
	I0531 19:21:25.268376   51569 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:21:25.268389   51569 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:21:25.268402   51569 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0531 19:21:25.268412   51569 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:21:25.268424   51569 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:21:25.268436   51569 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0531 19:21:25.268454   51569 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:21:25.268466   51569 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:21:25.268483   51569 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:21:25.268495   51569 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:21:25.268505   51569 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:21:25.268510   51569 command_runner.go:130] > # ]
	I0531 19:21:25.268524   51569 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:21:25.268539   51569 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:21:25.268552   51569 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0531 19:21:25.268564   51569 command_runner.go:130] > # Each entry in the table should follow the format:
	I0531 19:21:25.268572   51569 command_runner.go:130] > #
	I0531 19:21:25.268581   51569 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0531 19:21:25.268592   51569 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0531 19:21:25.268644   51569 command_runner.go:130] > # runtime_type = "oci"
	I0531 19:21:25.268655   51569 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0531 19:21:25.268665   51569 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0531 19:21:25.268681   51569 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0531 19:21:25.268690   51569 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0531 19:21:25.268699   51569 command_runner.go:130] > # monitor_env = []
	I0531 19:21:25.268709   51569 command_runner.go:130] > # privileged_without_host_devices = false
	I0531 19:21:25.268720   51569 command_runner.go:130] > # allowed_annotations = []
	I0531 19:21:25.268733   51569 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0531 19:21:25.268742   51569 command_runner.go:130] > # Where:
	I0531 19:21:25.268751   51569 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0531 19:21:25.268764   51569 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0531 19:21:25.268776   51569 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:21:25.268789   51569 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:21:25.268799   51569 command_runner.go:130] > #   in $PATH.
	I0531 19:21:25.268812   51569 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0531 19:21:25.268823   51569 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:21:25.268834   51569 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0531 19:21:25.268843   51569 command_runner.go:130] > #   state.
	I0531 19:21:25.268854   51569 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:21:25.268867   51569 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0531 19:21:25.268881   51569 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:21:25.268894   51569 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:21:25.268908   51569 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:21:25.268921   51569 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:21:25.268931   51569 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:21:25.268942   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:21:25.268967   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:21:25.268980   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:21:25.268994   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:21:25.269010   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:21:25.269023   51569 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:21:25.269038   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0531 19:21:25.269051   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0531 19:21:25.269061   51569 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:21:25.269075   51569 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0531 19:21:25.269086   51569 command_runner.go:130] > #   deprecated option "conmon".
	I0531 19:21:25.269098   51569 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0531 19:21:25.269109   51569 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0531 19:21:25.269133   51569 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0531 19:21:25.269145   51569 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:21:25.269157   51569 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0531 19:21:25.269169   51569 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0531 19:21:25.269181   51569 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0531 19:21:25.269197   51569 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0531 19:21:25.269205   51569 command_runner.go:130] > #
	I0531 19:21:25.269214   51569 command_runner.go:130] > # Using the seccomp notifier feature:
	I0531 19:21:25.269221   51569 command_runner.go:130] > #
	I0531 19:21:25.269232   51569 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0531 19:21:25.269246   51569 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0531 19:21:25.269254   51569 command_runner.go:130] > #
	I0531 19:21:25.269267   51569 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0531 19:21:25.269281   51569 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0531 19:21:25.269289   51569 command_runner.go:130] > #
	I0531 19:21:25.269299   51569 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0531 19:21:25.269308   51569 command_runner.go:130] > # feature.
	I0531 19:21:25.269315   51569 command_runner.go:130] > #
	I0531 19:21:25.269328   51569 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0531 19:21:25.269342   51569 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0531 19:21:25.269355   51569 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0531 19:21:25.269368   51569 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0531 19:21:25.269382   51569 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0531 19:21:25.269390   51569 command_runner.go:130] > #
	I0531 19:21:25.269402   51569 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0531 19:21:25.269414   51569 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0531 19:21:25.269421   51569 command_runner.go:130] > #
	I0531 19:21:25.269432   51569 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0531 19:21:25.269444   51569 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0531 19:21:25.269452   51569 command_runner.go:130] > #
	I0531 19:21:25.269462   51569 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0531 19:21:25.269476   51569 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0531 19:21:25.269486   51569 command_runner.go:130] > # limitation.
	I0531 19:21:25.269498   51569 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:21:25.269508   51569 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0531 19:21:25.269517   51569 command_runner.go:130] > runtime_type = "oci"
	I0531 19:21:25.269534   51569 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:21:25.269545   51569 command_runner.go:130] > runtime_config_path = ""
	I0531 19:21:25.269556   51569 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0531 19:21:25.269564   51569 command_runner.go:130] > monitor_cgroup = "pod"
	I0531 19:21:25.269573   51569 command_runner.go:130] > monitor_exec_cgroup = ""
	I0531 19:21:25.269580   51569 command_runner.go:130] > monitor_env = [
	I0531 19:21:25.269593   51569 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0531 19:21:25.269601   51569 command_runner.go:130] > ]
	I0531 19:21:25.269610   51569 command_runner.go:130] > privileged_without_host_devices = false
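The [crio.runtime.runtimes.runc] block that ends here is what maps the default runtime handler to the runc binary, its state root, and its conmon monitor. As a hedged sketch of reading that mapping back out of the TOML (assuming the github.com/BurntSushi/toml package; this is not how minikube or CRI-O itself parses the file, and only the fields visible in the dump are modelled):

	package main

	import (
		"fmt"

		"github.com/BurntSushi/toml"
	)

	// Only the fields shown in the dump above are modelled here.
	type runtimeHandler struct {
		RuntimePath string `toml:"runtime_path"`
		RuntimeType string `toml:"runtime_type"`
		RuntimeRoot string `toml:"runtime_root"`
		MonitorPath string `toml:"monitor_path"`
	}

	type crioConfig struct {
		Crio struct {
			Runtime struct {
				CgroupManager string                    `toml:"cgroup_manager"`
				Runtimes      map[string]runtimeHandler `toml:"runtimes"`
			} `toml:"runtime"`
		} `toml:"crio"`
	}

	// Sample trimmed down from the crio config dump above.
	const sample = `
	[crio.runtime]
	cgroup_manager = "cgroupfs"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	`

	func main() {
		var cfg crioConfig
		if _, err := toml.Decode(sample, &cfg); err != nil {
			panic(err)
		}
		runc := cfg.Crio.Runtime.Runtimes["runc"]
		fmt.Printf("runc -> %s (root %s, monitor %s)\n",
			runc.RuntimePath, runc.RuntimeRoot, runc.MonitorPath)
	}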
	I0531 19:21:25.269625   51569 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:21:25.269637   51569 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:21:25.269651   51569 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:21:25.269667   51569 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0531 19:21:25.269683   51569 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0531 19:21:25.269696   51569 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:21:25.269714   51569 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:21:25.269731   51569 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0531 19:21:25.269743   51569 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0531 19:21:25.269759   51569 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:21:25.269768   51569 command_runner.go:130] > # Example:
	I0531 19:21:25.269775   51569 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:21:25.269780   51569 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:21:25.269788   51569 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:21:25.269796   51569 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:21:25.269802   51569 command_runner.go:130] > # cpuset = 0
	I0531 19:21:25.269810   51569 command_runner.go:130] > # cpushares = "0-1"
	I0531 19:21:25.269817   51569 command_runner.go:130] > # Where:
	I0531 19:21:25.269825   51569 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:21:25.269835   51569 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:21:25.269844   51569 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:21:25.269858   51569 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:21:25.269875   51569 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:21:25.269888   51569 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0531 19:21:25.269899   51569 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0531 19:21:25.269910   51569 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0531 19:21:25.269920   51569 command_runner.go:130] > # Default value is set to true
	I0531 19:21:25.269940   51569 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0531 19:21:25.269952   51569 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0531 19:21:25.269963   51569 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0531 19:21:25.269974   51569 command_runner.go:130] > # Default value is set to 'false'
	I0531 19:21:25.269984   51569 command_runner.go:130] > # disable_hostport_mapping = false
	I0531 19:21:25.269996   51569 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:21:25.270004   51569 command_runner.go:130] > #
	I0531 19:21:25.270014   51569 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:21:25.270027   51569 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:21:25.270041   51569 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:21:25.270055   51569 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:21:25.270067   51569 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:21:25.270076   51569 command_runner.go:130] > [crio.image]
	I0531 19:21:25.270087   51569 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:21:25.270098   51569 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:21:25.270110   51569 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:21:25.270123   51569 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:21:25.270133   51569 command_runner.go:130] > # global_auth_file = ""
	I0531 19:21:25.270144   51569 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:21:25.270154   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.270165   51569 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:21:25.270180   51569 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:21:25.270197   51569 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:21:25.270210   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.270220   51569 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:21:25.270232   51569 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:21:25.270243   51569 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0531 19:21:25.270257   51569 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0531 19:21:25.270277   51569 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:21:25.270288   51569 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:21:25.270301   51569 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0531 19:21:25.270315   51569 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0531 19:21:25.270328   51569 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0531 19:21:25.270341   51569 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0531 19:21:25.270353   51569 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0531 19:21:25.270367   51569 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0531 19:21:25.270383   51569 command_runner.go:130] > # pinned_images = [
	I0531 19:21:25.270398   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270410   51569 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:21:25.270425   51569 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:21:25.270439   51569 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:21:25.270452   51569 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:21:25.270464   51569 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:21:25.270474   51569 command_runner.go:130] > # signature_policy = ""
	I0531 19:21:25.270486   51569 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0531 19:21:25.270498   51569 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0531 19:21:25.270511   51569 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0531 19:21:25.270525   51569 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0531 19:21:25.270538   51569 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0531 19:21:25.270550   51569 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0531 19:21:25.270563   51569 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:21:25.270576   51569 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:21:25.270584   51569 command_runner.go:130] > # changing them here.
	I0531 19:21:25.270594   51569 command_runner.go:130] > # insecure_registries = [
	I0531 19:21:25.270603   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270614   51569 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:21:25.270625   51569 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0531 19:21:25.270636   51569 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:21:25.270648   51569 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:21:25.270658   51569 command_runner.go:130] > # big_files_temporary_dir = ""
	I0531 19:21:25.270669   51569 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:21:25.270679   51569 command_runner.go:130] > # CNI plugins.
	I0531 19:21:25.270688   51569 command_runner.go:130] > [crio.network]
	I0531 19:21:25.270698   51569 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:21:25.270710   51569 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0531 19:21:25.270721   51569 command_runner.go:130] > # cni_default_network = ""
	I0531 19:21:25.270731   51569 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:21:25.270742   51569 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:21:25.270755   51569 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:21:25.270765   51569 command_runner.go:130] > # plugin_dirs = [
	I0531 19:21:25.270773   51569 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:21:25.270779   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270797   51569 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0531 19:21:25.270806   51569 command_runner.go:130] > [crio.metrics]
	I0531 19:21:25.270815   51569 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:21:25.270824   51569 command_runner.go:130] > enable_metrics = true
	I0531 19:21:25.270832   51569 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:21:25.270843   51569 command_runner.go:130] > # Per default all metrics are enabled.
	I0531 19:21:25.270856   51569 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:21:25.270874   51569 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:21:25.270886   51569 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:21:25.270896   51569 command_runner.go:130] > # metrics_collectors = [
	I0531 19:21:25.270905   51569 command_runner.go:130] > # 	"operations",
	I0531 19:21:25.270914   51569 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:21:25.270922   51569 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:21:25.270932   51569 command_runner.go:130] > # 	"operations_errors",
	I0531 19:21:25.270940   51569 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:21:25.270948   51569 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:21:25.270958   51569 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:21:25.270966   51569 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:21:25.270974   51569 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:21:25.270984   51569 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:21:25.270991   51569 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:21:25.271001   51569 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0531 19:21:25.271008   51569 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:21:25.271019   51569 command_runner.go:130] > # 	"containers_oom",
	I0531 19:21:25.271026   51569 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:21:25.271034   51569 command_runner.go:130] > # 	"operations_total",
	I0531 19:21:25.271042   51569 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:21:25.271052   51569 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:21:25.271063   51569 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:21:25.271073   51569 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:21:25.271081   51569 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:21:25.271090   51569 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:21:25.271097   51569 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:21:25.271107   51569 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:21:25.271116   51569 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:21:25.271127   51569 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0531 19:21:25.271148   51569 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0531 19:21:25.271157   51569 command_runner.go:130] > # ]
	I0531 19:21:25.271167   51569 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:21:25.271176   51569 command_runner.go:130] > # metrics_port = 9090
	I0531 19:21:25.271185   51569 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:21:25.271199   51569 command_runner.go:130] > # metrics_socket = ""
	I0531 19:21:25.271210   51569 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:21:25.271223   51569 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:21:25.271237   51569 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:21:25.271248   51569 command_runner.go:130] > # certificate on any modification event.
	I0531 19:21:25.271258   51569 command_runner.go:130] > # metrics_cert = ""
	I0531 19:21:25.271268   51569 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:21:25.271280   51569 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:21:25.271290   51569 command_runner.go:130] > # metrics_key = ""
	I0531 19:21:25.271301   51569 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:21:25.271311   51569 command_runner.go:130] > [crio.tracing]
	I0531 19:21:25.271322   51569 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:21:25.271332   51569 command_runner.go:130] > # enable_tracing = false
	I0531 19:21:25.271345   51569 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0531 19:21:25.271356   51569 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:21:25.271370   51569 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0531 19:21:25.271380   51569 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:21:25.271391   51569 command_runner.go:130] > # CRI-O NRI configuration.
	I0531 19:21:25.271400   51569 command_runner.go:130] > [crio.nri]
	I0531 19:21:25.271408   51569 command_runner.go:130] > # Globally enable or disable NRI.
	I0531 19:21:25.271418   51569 command_runner.go:130] > # enable_nri = false
	I0531 19:21:25.271427   51569 command_runner.go:130] > # NRI socket to listen on.
	I0531 19:21:25.271435   51569 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0531 19:21:25.271446   51569 command_runner.go:130] > # NRI plugin directory to use.
	I0531 19:21:25.271457   51569 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0531 19:21:25.271468   51569 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0531 19:21:25.271480   51569 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0531 19:21:25.271492   51569 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0531 19:21:25.271503   51569 command_runner.go:130] > # nri_disable_connections = false
	I0531 19:21:25.271512   51569 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0531 19:21:25.271523   51569 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0531 19:21:25.271538   51569 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0531 19:21:25.271549   51569 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0531 19:21:25.271562   51569 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:21:25.271571   51569 command_runner.go:130] > [crio.stats]
	I0531 19:21:25.271582   51569 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:21:25.271593   51569 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:21:25.271601   51569 command_runner.go:130] > # stats_collection_period = 0
	I0531 19:21:25.271648   51569 command_runner.go:130] ! time="2024-05-31 19:21:25.224840099Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0531 19:21:25.271670   51569 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
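	Most of the defaults dumped above are commented out; in practice a node overrides only the keys it needs via CRI-O's drop-in configuration directory. A minimal sketch of such an override, assuming the standard /etc/crio/crio.conf.d/ location (the file name and values here are illustrative, not taken from this run):

    sudo tee /etc/crio/crio.conf.d/10-overrides.conf <<'EOF' >/dev/null
    # Illustrative override: pin the pause image and keep metrics enabled,
    # mirroring the [crio.image] and [crio.metrics] keys dumped above.
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.metrics]
    enable_metrics = true
    metrics_port = 9090
    EOF
    # CRI-O re-reads crio.conf and crio.conf.d on restart.
    sudo systemctl restart crio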
	I0531 19:21:25.271822   51569 cni.go:84] Creating CNI manager for ""
	I0531 19:21:25.271837   51569 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0531 19:21:25.271853   51569 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:21:25.271880   51569 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-988083 NodeName:multinode-988083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:21:25.272055   51569 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-988083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
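	A multi-document config like the one generated above is consumed directly by kubeadm on the node. A minimal sketch of that step (the exact invocation and flags minikube uses are not shown in this excerpt; the path is an assumption based on the scp target a few lines below, with the .new suffix dropped):

    # Illustrative only: bootstrap the control plane from the generated config.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=all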
	
	I0531 19:21:25.272127   51569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 19:21:25.282412   51569 command_runner.go:130] > kubeadm
	I0531 19:21:25.282425   51569 command_runner.go:130] > kubectl
	I0531 19:21:25.282429   51569 command_runner.go:130] > kubelet
	I0531 19:21:25.282470   51569 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:21:25.282522   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:21:25.291767   51569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0531 19:21:25.308411   51569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:21:25.326700   51569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0531 19:21:25.342991   51569 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0531 19:21:25.346784   51569 command_runner.go:130] > 192.168.39.78	control-plane.minikube.internal
	I0531 19:21:25.346870   51569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:21:25.479741   51569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:21:25.494815   51569 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083 for IP: 192.168.39.78
	I0531 19:21:25.494871   51569 certs.go:194] generating shared ca certs ...
	I0531 19:21:25.494895   51569 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:21:25.495054   51569 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:21:25.495107   51569 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:21:25.495121   51569 certs.go:256] generating profile certs ...
	I0531 19:21:25.495205   51569 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/client.key
	I0531 19:21:25.495272   51569 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key.825c99a6
	I0531 19:21:25.495334   51569 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key
	I0531 19:21:25.495348   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:21:25.495372   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:21:25.495391   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:21:25.495407   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:21:25.495422   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:21:25.495438   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:21:25.495453   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:21:25.495467   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:21:25.495537   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:21:25.495581   51569 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:21:25.495595   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:21:25.495628   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:21:25.495662   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:21:25.495696   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:21:25.495752   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:21:25.495794   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.495813   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.495832   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.496415   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:21:25.521768   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:21:25.545150   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:21:25.568155   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:21:25.592254   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0531 19:21:25.615899   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:21:25.642528   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:21:25.667166   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:21:25.692434   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:21:25.716455   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:21:25.739890   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:21:25.762943   51569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:21:25.779233   51569 ssh_runner.go:195] Run: openssl version
	I0531 19:21:25.784993   51569 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0531 19:21:25.785183   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:21:25.795653   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800103   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800152   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800199   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.805727   51569 command_runner.go:130] > 51391683
	I0531 19:21:25.805892   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 19:21:25.814772   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:21:25.825016   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829323   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829493   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829528   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.834976   51569 command_runner.go:130] > 3ec20f2e
	I0531 19:21:25.835012   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:21:25.844231   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:21:25.855300   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860594   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860857   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860917   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.866758   51569 command_runner.go:130] > b5213941
	I0531 19:21:25.866825   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
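	The three blocks above install each CA into the system trust store using OpenSSL's subject-hash naming: the certificate is placed under /usr/share/ca-certificates and a symlink named <hash>.0 is created in /etc/ssl/certs. A minimal sketch of the same mechanism (certificate path illustrative):

    # Illustrative only: link a CA into /etc/ssl/certs under its subject hash,
    # mirroring the openssl/ln sequence in the log above.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"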
	I0531 19:21:25.876992   51569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:21:25.881663   51569 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:21:25.881689   51569 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0531 19:21:25.881698   51569 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0531 19:21:25.881707   51569 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:21:25.881717   51569 command_runner.go:130] > Access: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881728   51569 command_runner.go:130] > Modify: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881738   51569 command_runner.go:130] > Change: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881746   51569 command_runner.go:130] >  Birth: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881797   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:21:25.887520   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.887695   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:21:25.893098   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.893496   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:21:25.898902   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.898981   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:21:25.904411   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.904458   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:21:25.909979   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.910151   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0531 19:21:25.915245   51569 command_runner.go:130] > Certificate will not expire
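	Each of the checks above uses openssl's -checkend flag to confirm the certificate is still valid 86400 seconds (24 hours) from now; openssl itself prints "Certificate will not expire" and exits 0 when the check passes. A minimal equivalent (file path taken from the log, shown here only as an example):

    # Illustrative only: exits non-zero if the cert expires within the next 24h.
    openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt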
	I0531 19:21:25.915580   51569 kubeadm.go:391] StartCluster: {Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:21:25.915776   51569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:21:25.915814   51569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:21:25.952622   51569 command_runner.go:130] > 021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399
	I0531 19:21:25.952649   51569 command_runner.go:130] > 608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b
	I0531 19:21:25.952655   51569 command_runner.go:130] > 11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82
	I0531 19:21:25.952662   51569 command_runner.go:130] > ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f
	I0531 19:21:25.952667   51569 command_runner.go:130] > 7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6
	I0531 19:21:25.952672   51569 command_runner.go:130] > db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9
	I0531 19:21:25.952677   51569 command_runner.go:130] > 48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357
	I0531 19:21:25.952684   51569 command_runner.go:130] > 5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2
	I0531 19:21:25.952701   51569 cri.go:89] found id: "021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399"
	I0531 19:21:25.952709   51569 cri.go:89] found id: "608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b"
	I0531 19:21:25.952712   51569 cri.go:89] found id: "11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82"
	I0531 19:21:25.952715   51569 cri.go:89] found id: "ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f"
	I0531 19:21:25.952718   51569 cri.go:89] found id: "7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6"
	I0531 19:21:25.952720   51569 cri.go:89] found id: "db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9"
	I0531 19:21:25.952723   51569 cri.go:89] found id: "48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357"
	I0531 19:21:25.952725   51569 cri.go:89] found id: "5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2"
	I0531 19:21:25.952728   51569 cri.go:89] found id: ""
	I0531 19:21:25.952765   51569 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.415060172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183371415038240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f69dee56-9f44-4dc0-98e9-0a40cfcec242 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.415633226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af921b22-e84a-4c2e-b5c9-6b08466056bd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.415687261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af921b22-e84a-4c2e-b5c9-6b08466056bd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.416064822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af921b22-e84a-4c2e-b5c9-6b08466056bd name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.458375640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6ee8ab58-eea9-45c7-bc0d-58fb6f1a0a9d name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.458449845Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6ee8ab58-eea9-45c7-bc0d-58fb6f1a0a9d name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.459577684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52b70c22-8557-4da4-89ae-a6cd2a8d0980 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.460029886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183371460004415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52b70c22-8557-4da4-89ae-a6cd2a8d0980 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.460605231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bf1f31d7-7643-4674-9dca-1ec46894ae78 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.460659640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bf1f31d7-7643-4674-9dca-1ec46894ae78 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.461101931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bf1f31d7-7643-4674-9dca-1ec46894ae78 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.502858433Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e2a507a1-9510-4837-9177-32cbd31de895 name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.502941664Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e2a507a1-9510-4837-9177-32cbd31de895 name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.504418521Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6eda060-0c25-43d0-9b35-034490625002 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.504991519Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183371504964443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6eda060-0c25-43d0-9b35-034490625002 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.505634821Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c990f0d-2b86-43ea-934c-819c1eaf0132 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.505863352Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c990f0d-2b86-43ea-934c-819c1eaf0132 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.506251863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c990f0d-2b86-43ea-934c-819c1eaf0132 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.548830929Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=917bd584-926b-4d54-9cdc-d0ddad3cb443 name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.548913440Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=917bd584-926b-4d54-9cdc-d0ddad3cb443 name=/runtime.v1.RuntimeService/Version
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.549990054Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=230bab9a-e852-4861-a58b-03a133f8bdf7 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.550613076Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183371550587946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230bab9a-e852-4861-a58b-03a133f8bdf7 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.551028967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d017418d-1cb5-43de-9c47-cdcbc7e0ecd1 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.551083139Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d017418d-1cb5-43de-9c47-cdcbc7e0ecd1 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:22:51 multinode-988083 crio[2885]: time="2024-05-31 19:22:51.551465949Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d017418d-1cb5-43de-9c47-cdcbc7e0ecd1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8113967a40174       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      45 seconds ago       Running             busybox                   1                   4c713433e3e57       busybox-fc5497c4f-t6fcj
	bcbe244eb4305       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      About a minute ago   Running             kindnet-cni               1                   46f586e0ccd12       kindnet-zcptz
	374c544c21751       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   1                   2f03e186c0503       coredns-7db6d8ff4d-gn5s6
	35ccdd242695a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      About a minute ago   Running             kube-proxy                1                   f826816bfc53c       kube-proxy-8cnnb
	565d8eebc79d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   9c9eee77ba890       storage-provisioner
	7eba17d8c1a7a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   cb1ef0fda917b       etcd-multinode-988083
	771ce65646831       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      About a minute ago   Running             kube-controller-manager   1                   646d4ae505009       kube-controller-manager-multinode-988083
	139bda4e9c6e8       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      About a minute ago   Running             kube-scheduler            1                   a69738f516151       kube-scheduler-multinode-988083
	1c238706b8ea8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      About a minute ago   Running             kube-apiserver            1                   923ca500210d0       kube-apiserver-multinode-988083
	3989aa18c530a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   1119cb1560382       busybox-fc5497c4f-t6fcj
	021a7951eea76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago        Exited              coredns                   0                   18c9eea462560       coredns-7db6d8ff4d-gn5s6
	608df439d70ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   ab3d659db26ec       storage-provisioner
	11fcd8b1bb42b       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    7 minutes ago        Exited              kindnet-cni               0                   caa9f33f0891b       kindnet-zcptz
	ff4fdd6109937       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      7 minutes ago        Exited              kube-proxy                0                   73f8d20d8c926       kube-proxy-8cnnb
	7e3b7dd02f327       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      7 minutes ago        Exited              kube-scheduler            0                   52691cee69e64       kube-scheduler-multinode-988083
	db1fde1550887       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago        Exited              etcd                      0                   0320af84cec43       etcd-multinode-988083
	48c2219950201       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      7 minutes ago        Exited              kube-controller-manager   0                   d3d9ef37fd070       kube-controller-manager-multinode-988083
	5180205e9c241       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      7 minutes ago        Exited              kube-apiserver            0                   8923e5648e4dc       kube-apiserver-multinode-988083
	
	
	==> coredns [021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399] <==
	[INFO] 10.244.1.2:43548 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706273s
	[INFO] 10.244.1.2:60115 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109261s
	[INFO] 10.244.1.2:60707 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079029s
	[INFO] 10.244.1.2:36626 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001273064s
	[INFO] 10.244.1.2:52390 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225703s
	[INFO] 10.244.1.2:56106 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092886s
	[INFO] 10.244.1.2:56334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091338s
	[INFO] 10.244.0.3:45555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115436s
	[INFO] 10.244.0.3:56080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031809s
	[INFO] 10.244.0.3:55687 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108909s
	[INFO] 10.244.0.3:35835 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102581s
	[INFO] 10.244.1.2:51436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132426s
	[INFO] 10.244.1.2:52123 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151267s
	[INFO] 10.244.1.2:46390 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166264s
	[INFO] 10.244.1.2:55230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092441s
	[INFO] 10.244.0.3:35015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254081s
	[INFO] 10.244.0.3:46384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009303s
	[INFO] 10.244.0.3:49629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079659s
	[INFO] 10.244.0.3:45631 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196091s
	[INFO] 10.244.1.2:55118 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011302s
	[INFO] 10.244.1.2:46076 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100271s
	[INFO] 10.244.1.2:32855 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071938s
	[INFO] 10.244.1.2:46235 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105924s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38299 - 63203 "HINFO IN 2236214529243632812.2977748041558225387. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.092639958s
	
	
	==> describe nodes <==
	Name:               multinode-988083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-988083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=multinode-988083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T19_15_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:15:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-988083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:22:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-988083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f0d3ea92bfc420b9fe05a28ebe0054f
	  System UUID:                1f0d3ea9-2bfc-420b-9fe0-5a28ebe0054f
	  Boot ID:                    b045e951-cb60-4f76-8208-2f943082b070
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t6fcj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  kube-system                 coredns-7db6d8ff4d-gn5s6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m14s
	  kube-system                 etcd-multinode-988083                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m29s
	  kube-system                 kindnet-zcptz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m14s
	  kube-system                 kube-apiserver-multinode-988083             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-controller-manager-multinode-988083    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 kube-proxy-8cnnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 kube-scheduler-multinode-988083             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m28s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m13s              kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  Starting                 7m29s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     7m28s              kubelet          Node multinode-988083 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m28s              kubelet          Node multinode-988083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m28s              kubelet          Node multinode-988083 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  7m28s              kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m15s              node-controller  Node multinode-988083 event: Registered Node multinode-988083 in Controller
	  Normal  NodeReady                7m8s               kubelet          Node multinode-988083 status is now: NodeReady
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-988083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-988083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node multinode-988083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           67s                node-controller  Node multinode-988083 event: Registered Node multinode-988083 in Controller
	
	
	Name:               multinode-988083-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-988083-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=multinode-988083
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T19_22_12_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:22:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-988083-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:22:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:22:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:22:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:22:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:22:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    multinode-988083-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2259ed64316412d89f8d1d302d9cbe5
	  System UUID:                d2259ed6-4316-412d-89f8-d1d302d9cbe5
	  Boot ID:                    79876d25-b44a-4fc8-8ee8-83f2b763e7a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blqgt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-xx6dr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m36s
	  kube-system                 kube-proxy-9htg9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m31s                  kube-proxy  
	  Normal  Starting                 36s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m37s)  kubelet     Node multinode-988083-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m37s)  kubelet     Node multinode-988083-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m37s)  kubelet     Node multinode-988083-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m26s                  kubelet     Node multinode-988083-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)      kubelet     Node multinode-988083-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)      kubelet     Node multinode-988083-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)      kubelet     Node multinode-988083-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                31s                    kubelet     Node multinode-988083-m02 status is now: NodeReady
	
	
	Name:               multinode-988083-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-988083-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=multinode-988083
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T19_22_40_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:22:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-988083-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:22:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:22:48 +0000   Fri, 31 May 2024 19:22:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:22:48 +0000   Fri, 31 May 2024 19:22:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:22:48 +0000   Fri, 31 May 2024 19:22:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:22:48 +0000   Fri, 31 May 2024 19:22:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.207
	  Hostname:    multinode-988083-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 72edd5f99a0c4b12a9d5417d86a46797
	  System UUID:                72edd5f9-9a0c-4b12-a9d5-417d86a46797
	  Boot ID:                    2b79f9db-e67b-4af6-be1c-63e258ca387f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2c6qt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m52s
	  kube-system                 kube-proxy-2v9gf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m9s                   kube-proxy       
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  Starting                 7s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  5m53s (x2 over 5m53s)  kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m53s (x2 over 5m53s)  kubelet          Node multinode-988083-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m53s (x2 over 5m53s)  kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m42s                  kubelet          Node multinode-988083-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m13s (x2 over 5m13s)  kubelet          Node multinode-988083-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m13s (x2 over 5m13s)  kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m13s (x2 over 5m13s)  kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m4s                   kubelet          Node multinode-988083-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x2 over 12s)      kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x2 over 12s)      kubelet          Node multinode-988083-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x2 over 12s)      kubelet          Node multinode-988083-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7s                     node-controller  Node multinode-988083-m03 event: Registered Node multinode-988083-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-988083-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.065541] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059287] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.187155] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.111178] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.259616] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.118457] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.718279] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.062283] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983017] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.074714] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.890375] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.764667] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[  +5.648919] kauditd_printk_skb: 56 callbacks suppressed
	[May31 19:16] kauditd_printk_skb: 16 callbacks suppressed
	[May31 19:21] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +0.153744] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +0.182260] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.140102] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[  +0.268583] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +0.687886] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +2.226217] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +4.685331] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.603475] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.509237] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[May31 19:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf] <==
	{"level":"info","ts":"2024-05-31T19:21:28.926107Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","added-peer-id":"83fde65c75733ea3","added-peer-peer-urls":["https://192.168.39.78:2380"]}
	{"level":"info","ts":"2024-05-31T19:21:28.926272Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:21:28.927193Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:21:28.939194Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.940127Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.940173Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.975554Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-31T19:21:28.975866Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"83fde65c75733ea3","initial-advertise-peer-urls":["https://192.168.39.78:2380"],"listen-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T19:21:28.975944Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T19:21:28.976071Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:21:28.976096Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:21:30.283996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.284096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.28416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.284197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.288783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:multinode-988083 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-31T19:21:30.288995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:21:30.289027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:21:30.289553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-31T19:21:30.2896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-31T19:21:30.291395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-05-31T19:21:30.291492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9] <==
	{"level":"warn","ts":"2024-05-31T19:15:20.927689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-31T19:15:20.506585Z","time spent":"421.092394ms","remote":"127.0.0.1:35232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":0,"response size":27,"request content":"key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" "}
	{"level":"warn","ts":"2024-05-31T19:15:20.927841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.997769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-05-31T19:15:20.927879Z","caller":"traceutil/trace.go:171","msg":"trace[1517232406] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:17; }","duration":"154.05085ms","start":"2024-05-31T19:15:20.773823Z","end":"2024-05-31T19:15:20.927874Z","steps":["trace[1517232406] 'agreement among raft nodes before linearized reading'  (duration: 154.006292ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:15.157085Z","caller":"traceutil/trace.go:171","msg":"trace[10928778] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"219.182497ms","start":"2024-05-31T19:16:14.937865Z","end":"2024-05-31T19:16:15.157048Z","steps":["trace[10928778] 'process raft request'  (duration: 144.97588ms)","trace[10928778] 'compare'  (duration: 74.095272ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-31T19:16:15.157828Z","caller":"traceutil/trace.go:171","msg":"trace[812641342] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"183.982685ms","start":"2024-05-31T19:16:14.973835Z","end":"2024-05-31T19:16:15.157818Z","steps":["trace[812641342] 'process raft request'  (duration: 183.680901ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-31T19:16:17.603573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.157621ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4513609375405738155 > lease_revoke:<id:3ea38fd013ee882f>","response":"size:28"}
	{"level":"info","ts":"2024-05-31T19:16:17.603715Z","caller":"traceutil/trace.go:171","msg":"trace[492931041] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"199.187142ms","start":"2024-05-31T19:16:17.404512Z","end":"2024-05-31T19:16:17.603699Z","steps":["trace[492931041] 'read index received'  (duration: 15.44467ms)","trace[492931041] 'applied index is now lower than readState.Index'  (duration: 183.740726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-31T19:16:17.603836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.292998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-988083-m02\" ","response":"range_response_count:1 size:3273"}
	{"level":"info","ts":"2024-05-31T19:16:17.603891Z","caller":"traceutil/trace.go:171","msg":"trace[984424241] range","detail":"{range_begin:/registry/minions/multinode-988083-m02; range_end:; response_count:1; response_revision:479; }","duration":"199.400521ms","start":"2024-05-31T19:16:17.404479Z","end":"2024-05-31T19:16:17.60388Z","steps":["trace[984424241] 'agreement among raft nodes before linearized reading'  (duration: 199.265106ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:59.095581Z","caller":"traceutil/trace.go:171","msg":"trace[1955100436] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"184.424604ms","start":"2024-05-31T19:16:58.911143Z","end":"2024-05-31T19:16:59.095568Z","steps":["trace[1955100436] 'process raft request'  (duration: 184.391065ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:59.095815Z","caller":"traceutil/trace.go:171","msg":"trace[1033148884] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"255.185052ms","start":"2024-05-31T19:16:58.840614Z","end":"2024-05-31T19:16:59.095799Z","steps":["trace[1033148884] 'process raft request'  (duration: 220.547209ms)","trace[1033148884] 'compare'  (duration: 34.291818ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-31T19:16:59.095948Z","caller":"traceutil/trace.go:171","msg":"trace[293603268] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"248.510828ms","start":"2024-05-31T19:16:58.847431Z","end":"2024-05-31T19:16:59.095942Z","steps":["trace[293603268] 'read index received'  (duration: 213.739613ms)","trace[293603268] 'applied index is now lower than readState.Index'  (duration: 34.770676ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-31T19:16:59.096061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.617354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-988083-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-31T19:16:59.096098Z","caller":"traceutil/trace.go:171","msg":"trace[37434597] range","detail":"{range_begin:/registry/minions/multinode-988083-m03; range_end:; response_count:1; response_revision:574; }","duration":"248.694831ms","start":"2024-05-31T19:16:58.847397Z","end":"2024-05-31T19:16:59.096092Z","steps":["trace[37434597] 'agreement among raft nodes before linearized reading'  (duration: 248.584034ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:17:41.776669Z","caller":"traceutil/trace.go:171","msg":"trace[536818303] transaction","detail":"{read_only:false; response_revision:686; number_of_response:1; }","duration":"116.65299ms","start":"2024-05-31T19:17:41.659998Z","end":"2024-05-31T19:17:41.776651Z","steps":["trace[536818303] 'process raft request'  (duration: 116.468685ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:19:52.789278Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-31T19:19:52.789527Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-988083","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-05-31T19:19:52.789636Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.789768Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.882417Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.882545Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-31T19:19:52.882618Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83fde65c75733ea3","current-leader-member-id":"83fde65c75733ea3"}
	{"level":"info","ts":"2024-05-31T19:19:52.884839Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:19:52.884985Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:19:52.88502Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-988083","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 19:22:52 up 8 min,  0 users,  load average: 0.17, 0.33, 0.22
	Linux multinode-988083 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82] <==
	I0531 19:19:03.376895       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:13.386237       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:13.386368       1 main.go:227] handling current node
	I0531 19:19:13.386427       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:13.386452       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:13.386577       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:13.386599       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:23.401979       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:23.402019       1 main.go:227] handling current node
	I0531 19:19:23.402030       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:23.402035       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:23.402172       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:23.402198       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:33.414903       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:33.414983       1 main.go:227] handling current node
	I0531 19:19:33.415090       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:33.415117       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:33.415247       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:33.415267       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:43.450434       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:43.450538       1 main.go:227] handling current node
	I0531 19:19:43.450567       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:43.450585       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:43.450730       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:43.450751       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8] <==
	I0531 19:22:03.374345       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:22:13.379369       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:22:13.379401       1 main.go:227] handling current node
	I0531 19:22:13.379412       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:22:13.379416       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:22:13.379525       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:22:13.379529       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:22:23.392099       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:22:23.392145       1 main.go:227] handling current node
	I0531 19:22:23.392159       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:22:23.392167       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:22:23.392360       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:22:23.392390       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:22:33.454730       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:22:33.454859       1 main.go:227] handling current node
	I0531 19:22:33.454871       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:22:33.454900       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:22:33.455000       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:22:33.455020       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:22:43.467922       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:22:43.467956       1 main.go:227] handling current node
	I0531 19:22:43.467966       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:22:43.467970       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:22:43.468160       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:22:43.468184       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c] <==
	I0531 19:21:31.516413       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:21:31.575829       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 19:21:31.576990       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 19:21:31.577049       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 19:21:31.577206       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 19:21:31.577256       1 aggregator.go:165] initial CRD sync complete...
	I0531 19:21:31.577263       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 19:21:31.577267       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 19:21:31.578732       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 19:21:31.586763       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:21:31.652350       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 19:21:31.660005       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 19:21:31.660040       1 policy_source.go:224] refreshing policies
	I0531 19:21:31.675338       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:21:31.675681       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:21:31.680330       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 19:21:31.683607       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:21:32.493233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:21:33.822432       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 19:21:33.947592       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 19:21:33.959457       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 19:21:34.020967       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:21:34.026026       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:21:43.845432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 19:21:43.877807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2] <==
	I0531 19:19:52.823948       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0531 19:19:52.823959       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0531 19:19:52.823991       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0531 19:19:52.824032       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0531 19:19:52.824058       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0531 19:19:52.824237       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0531 19:19:52.824427       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0531 19:19:52.825604       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0531 19:19:52.825698       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0531 19:19:52.826132       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826232       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826388       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826478       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826553       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0531 19:19:52.826645       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.826688       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828095       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828210       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828645       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828770       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828880       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828995       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.829121       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.829241       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0531 19:19:52.832262       1 controller.go:157] Shutting down quota evaluator
	
	
	==> kube-controller-manager [48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357] <==
	I0531 19:16:15.159762       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m02\" does not exist"
	I0531 19:16:15.224390       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m02" podCIDRs=["10.244.1.0/24"]
	I0531 19:16:16.522992       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-988083-m02"
	I0531 19:16:25.247671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:16:27.468267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.985787ms"
	I0531 19:16:27.480547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.719584ms"
	I0531 19:16:27.480975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.58µs"
	I0531 19:16:27.481105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.051µs"
	I0531 19:16:27.489814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.16µs"
	I0531 19:16:31.345910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.996712ms"
	I0531 19:16:31.351051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.084452ms"
	I0531 19:16:31.351267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.182µs"
	I0531 19:16:59.098860       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:16:59.098977       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:16:59.112099       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.2.0/24"]
	I0531 19:17:01.544777       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-988083-m03"
	I0531 19:17:09.249119       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:37.368257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:38.464763       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:38.464822       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:17:38.483785       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.3.0/24"]
	I0531 19:17:47.187647       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:18:31.598872       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m03"
	I0531 19:18:31.657562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.538203ms"
	I0531 19:18:31.657702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.684µs"
	
	
	==> kube-controller-manager [771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4] <==
	I0531 19:21:44.135189       1 shared_informer.go:320] Caches are synced for attach detach
	I0531 19:21:44.524248       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:21:44.572260       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:21:44.572349       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0531 19:22:07.583647       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.821569ms"
	I0531 19:22:07.602368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.648319ms"
	I0531 19:22:07.602474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.379µs"
	I0531 19:22:11.858053       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m02\" does not exist"
	I0531 19:22:11.870129       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m02" podCIDRs=["10.244.1.0/24"]
	I0531 19:22:12.767909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.998µs"
	I0531 19:22:12.781039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.042µs"
	I0531 19:22:12.814224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.207µs"
	I0531 19:22:12.821469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.859µs"
	I0531 19:22:12.828970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.134µs"
	I0531 19:22:14.698327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.501µs"
	I0531 19:22:20.113903       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:20.133464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.019µs"
	I0531 19:22:20.147378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.551µs"
	I0531 19:22:24.102583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.988515ms"
	I0531 19:22:24.103263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.546µs"
	I0531 19:22:38.415221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:39.693410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:39.693459       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:22:39.702445       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.2.0/24"]
	I0531 19:22:48.722859       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	
	
	==> kube-proxy [35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab] <==
	I0531 19:21:32.683675       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:21:32.701811       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0531 19:21:32.811283       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:21:32.811443       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:21:32.811482       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:21:32.817570       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:21:32.817830       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:21:32.817862       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:21:32.819179       1 config.go:192] "Starting service config controller"
	I0531 19:21:32.819232       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:21:32.819260       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:21:32.819282       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:21:32.826005       1 config.go:319] "Starting node config controller"
	I0531 19:21:32.826047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:21:32.920451       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:21:32.925765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 19:21:32.926125       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f] <==
	I0531 19:15:38.227526       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:15:38.242932       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0531 19:15:38.336613       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:15:38.336654       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:15:38.336682       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:15:38.339959       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:15:38.340183       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:15:38.340196       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:15:38.341913       1 config.go:192] "Starting service config controller"
	I0531 19:15:38.341940       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:15:38.341967       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:15:38.341971       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:15:38.342785       1 config.go:319] "Starting node config controller"
	I0531 19:15:38.342792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:15:38.442419       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 19:15:38.442495       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:15:38.444381       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd] <==
	I0531 19:21:29.652867       1 serving.go:380] Generated self-signed cert in-memory
	W0531 19:21:31.536076       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 19:21:31.537183       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:21:31.537251       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:21:31.537279       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:21:31.577804       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0531 19:21:31.578788       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:21:31.583390       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0531 19:21:31.584407       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:21:31.587955       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:21:31.584433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:21:31.688747       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6] <==
	E0531 19:15:20.416926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:15:20.416973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:20.416981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:20.417055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:15:20.417085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:15:20.417185       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:15:20.417214       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:15:21.289999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:15:21.290032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:15:21.401710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:21.401834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:21.434360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:15:21.434465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:15:21.496821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:15:21.497062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 19:15:21.514099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:15:21.514175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:15:21.558217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:21.558421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:21.565720       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:15:21.565990       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:15:21.567222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:15:21.567376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0531 19:15:23.997193       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 19:19:52.797201       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:21:28 multinode-988083 kubelet[3102]: E0531 19:21:28.560859    3102 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.78:8443: connect: connection refused" node="multinode-988083"
	May 31 19:21:29 multinode-988083 kubelet[3102]: I0531 19:21:29.363662    3102 kubelet_node_status.go:73] "Attempting to register node" node="multinode-988083"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.734953    3102 kubelet_node_status.go:112] "Node was previously registered" node="multinode-988083"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.735064    3102 kubelet_node_status.go:76] "Successfully registered node" node="multinode-988083"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.736816    3102 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.737768    3102 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.831256    3102 apiserver.go:52] "Watching apiserver"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834549    3102 topology_manager.go:215] "Topology Admit Handler" podUID="b40f7eb3-69bd-4fed-b782-6ddec667356d" podNamespace="kube-system" podName="kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834698    3102 topology_manager.go:215] "Topology Admit Handler" podUID="45d80625-5a0f-4e15-96fd-dbecc3b1dcc9" podNamespace="kube-system" podName="kube-proxy-8cnnb"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834770    3102 topology_manager.go:215] "Topology Admit Handler" podUID="d1c95c7e-5056-4166-a4e5-64843b6c0d1f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gn5s6"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834842    3102 topology_manager.go:215] "Topology Admit Handler" podUID="ee986eb4-5a08-4aee-9fbd-1180c6b8d145" podNamespace="kube-system" podName="storage-provisioner"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834901    3102 topology_manager.go:215] "Topology Admit Handler" podUID="222bc4e6-48ac-49a5-ac08-029712c2c95b" podNamespace="default" podName="busybox-fc5497c4f-t6fcj"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.850630    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871776    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-xtables-lock\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871818    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-lib-modules\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871835    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d80625-5a0f-4e15-96fd-dbecc3b1dcc9-xtables-lock\") pod \"kube-proxy-8cnnb\" (UID: \"45d80625-5a0f-4e15-96fd-dbecc3b1dcc9\") " pod="kube-system/kube-proxy-8cnnb"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871877    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee986eb4-5a08-4aee-9fbd-1180c6b8d145-tmp\") pod \"storage-provisioner\" (UID: \"ee986eb4-5a08-4aee-9fbd-1180c6b8d145\") " pod="kube-system/storage-provisioner"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871915    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-cni-cfg\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871938    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d80625-5a0f-4e15-96fd-dbecc3b1dcc9-lib-modules\") pod \"kube-proxy-8cnnb\" (UID: \"45d80625-5a0f-4e15-96fd-dbecc3b1dcc9\") " pod="kube-system/kube-proxy-8cnnb"
	May 31 19:21:39 multinode-988083 kubelet[3102]: I0531 19:21:39.004855    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 31 19:22:27 multinode-988083 kubelet[3102]: E0531 19:22:27.904717    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 19:22:27 multinode-988083 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 19:22:27 multinode-988083 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 19:22:27 multinode-988083 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 19:22:27 multinode-988083 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:22:51.146362   52590 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-988083 -n multinode-988083
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-988083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (303.08s)
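
The "bufio.Scanner: token too long" error in the stderr block above comes from Go's bufio package: a Scanner refuses tokens longer than its default 64 KiB limit (bufio.MaxScanTokenSize), and the failure message means the last-start log at .minikube/logs/lastStart.txt evidently contains a line longer than that limit, so it could not be re-read for this report. A minimal, self-contained sketch (not minikube's code; the file path is only illustrative) of reading such a file with an enlarged scanner buffer:

	package main
	
	import (
		"bufio"
		"fmt"
		"os"
	)
	
	func main() {
		f, err := os.Open("lastStart.txt") // illustrative path, not the real location
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
	
		scanner := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); allow lines up to 1 MiB.
		scanner.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for scanner.Scan() {
			_ = scanner.Text() // process one (possibly very long) line
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}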

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 stop
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-988083 stop: exit status 82 (2m0.452508638s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-988083-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-988083 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status
E0531 19:24:58.917101   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-988083 status: exit status 3 (18.832484461s)

                                                
                                                
-- stdout --
	multinode-988083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-988083-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:25:14.333216   53228 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.176:22: connect: no route to host
	E0531 19:25:14.333250   53228 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.176:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-988083 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-988083 -n multinode-988083
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-988083 logs -n 25: (1.476335761s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083:/home/docker/cp-test_multinode-988083-m02_multinode-988083.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083 sudo cat                                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m02_multinode-988083.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03:/home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083-m03 sudo cat                                   | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp testdata/cp-test.txt                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083:/home/docker/cp-test_multinode-988083-m03_multinode-988083.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083 sudo cat                                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m03_multinode-988083.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt                       | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m02:/home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n                                                                 | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | multinode-988083-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-988083 ssh -n multinode-988083-m02 sudo cat                                   | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | /home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-988083 node stop m03                                                          | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	| node    | multinode-988083 node start                                                             | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC | 31 May 24 19:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-988083                                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC |                     |
	| stop    | -p multinode-988083                                                                     | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:17 UTC |                     |
	| start   | -p multinode-988083                                                                     | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:19 UTC | 31 May 24 19:22 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-988083                                                                | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:22 UTC |                     |
	| node    | multinode-988083 node delete                                                            | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:22 UTC | 31 May 24 19:22 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-988083 stop                                                                   | multinode-988083 | jenkins | v1.33.1 | 31 May 24 19:22 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 19:19:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:19:51.862236   51569 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:19:51.862479   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:19:51.862488   51569 out.go:304] Setting ErrFile to fd 2...
	I0531 19:19:51.862492   51569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:19:51.862711   51569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:19:51.863263   51569 out.go:298] Setting JSON to false
	I0531 19:19:51.864194   51569 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":7338,"bootTime":1717175854,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:19:51.864251   51569 start.go:139] virtualization: kvm guest
	I0531 19:19:51.866502   51569 out.go:177] * [multinode-988083] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:19:51.867987   51569 notify.go:220] Checking for updates...
	I0531 19:19:51.868011   51569 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:19:51.869477   51569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:19:51.870715   51569 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:19:51.872026   51569 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:19:51.873133   51569 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:19:51.874256   51569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:19:51.875792   51569 config.go:182] Loaded profile config "multinode-988083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:19:51.875881   51569 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:19:51.876295   51569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:19:51.876339   51569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:19:51.890868   51569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39075
	I0531 19:19:51.891183   51569 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:19:51.891671   51569 main.go:141] libmachine: Using API Version  1
	I0531 19:19:51.891707   51569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:19:51.892085   51569 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:19:51.892269   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.927015   51569 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 19:19:51.928133   51569 start.go:297] selected driver: kvm2
	I0531 19:19:51.928149   51569 start.go:901] validating driver "kvm2" against &{Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:19:51.928292   51569 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:19:51.928592   51569 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:19:51.928656   51569 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:19:51.943456   51569 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:19:51.944086   51569 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:19:51.944112   51569 cni.go:84] Creating CNI manager for ""
	I0531 19:19:51.944119   51569 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0531 19:19:51.944162   51569 start.go:340] cluster config:
	{Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:19:51.944282   51569 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:19:51.946548   51569 out.go:177] * Starting "multinode-988083" primary control-plane node in "multinode-988083" cluster
	I0531 19:19:51.947559   51569 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:19:51.947583   51569 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 19:19:51.947592   51569 cache.go:56] Caching tarball of preloaded images
	I0531 19:19:51.947662   51569 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:19:51.947672   51569 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 19:19:51.947773   51569 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/config.json ...
	I0531 19:19:51.947942   51569 start.go:360] acquireMachinesLock for multinode-988083: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:19:51.947975   51569 start.go:364] duration metric: took 18.346µs to acquireMachinesLock for "multinode-988083"
	I0531 19:19:51.947988   51569 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:19:51.947995   51569 fix.go:54] fixHost starting: 
	I0531 19:19:51.948234   51569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:19:51.948259   51569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:19:51.961971   51569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43113
	I0531 19:19:51.962540   51569 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:19:51.963163   51569 main.go:141] libmachine: Using API Version  1
	I0531 19:19:51.963196   51569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:19:51.963535   51569 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:19:51.963711   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.963859   51569 main.go:141] libmachine: (multinode-988083) Calling .GetState
	I0531 19:19:51.965438   51569 fix.go:112] recreateIfNeeded on multinode-988083: state=Running err=<nil>
	W0531 19:19:51.965467   51569 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 19:19:51.967215   51569 out.go:177] * Updating the running kvm2 "multinode-988083" VM ...
	I0531 19:19:51.968306   51569 machine.go:94] provisionDockerMachine start ...
	I0531 19:19:51.968332   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:19:51.968498   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:51.971365   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:51.971876   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:51.971906   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:51.972091   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:51.972251   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:51.972428   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:51.972556   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:51.972708   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:51.972918   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:51.972930   51569 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 19:19:52.074296   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-988083
	
	I0531 19:19:52.074337   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.074579   51569 buildroot.go:166] provisioning hostname "multinode-988083"
	I0531 19:19:52.074607   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.074813   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.077512   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.077869   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.077899   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.078106   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.078283   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.078424   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.078557   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.078721   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.078911   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.078927   51569 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-988083 && echo "multinode-988083" | sudo tee /etc/hostname
	I0531 19:19:52.210803   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-988083
	
	I0531 19:19:52.210833   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.213438   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.213778   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.213808   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.214015   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.214197   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.214322   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.214434   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.214545   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.214749   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.214782   51569 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-988083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-988083/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-988083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:19:52.317810   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:19:52.317837   51569 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:19:52.317861   51569 buildroot.go:174] setting up certificates
	I0531 19:19:52.317872   51569 provision.go:84] configureAuth start
	I0531 19:19:52.317880   51569 main.go:141] libmachine: (multinode-988083) Calling .GetMachineName
	I0531 19:19:52.318122   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:19:52.320330   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.320665   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.320694   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.320823   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.322754   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.323060   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.323084   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.323225   51569 provision.go:143] copyHostCerts
	I0531 19:19:52.323250   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:19:52.323288   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:19:52.323300   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:19:52.323366   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:19:52.323434   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:19:52.323451   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:19:52.323458   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:19:52.323482   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:19:52.323520   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:19:52.323540   51569 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:19:52.323546   51569 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:19:52.323568   51569 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:19:52.323610   51569 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.multinode-988083 san=[127.0.0.1 192.168.39.78 localhost minikube multinode-988083]
	I0531 19:19:52.519258   51569 provision.go:177] copyRemoteCerts
	I0531 19:19:52.519319   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:19:52.519338   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.521780   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.522137   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.522153   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.522344   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.522524   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.522693   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.522816   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:19:52.600304   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0531 19:19:52.600377   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:19:52.625245   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0531 19:19:52.625319   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:19:52.650633   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0531 19:19:52.650708   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0531 19:19:52.677213   51569 provision.go:87] duration metric: took 359.330813ms to configureAuth
	I0531 19:19:52.677239   51569 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:19:52.677480   51569 config.go:182] Loaded profile config "multinode-988083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:19:52.677570   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:19:52.679874   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.680284   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:19:52.680308   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:19:52.680432   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:19:52.680630   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.680811   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:19:52.680989   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:19:52.681152   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:19:52.681338   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:19:52.681354   51569 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:21:23.353031   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:21:23.353056   51569 machine.go:97] duration metric: took 1m31.384732212s to provisionDockerMachine
	I0531 19:21:23.353078   51569 start.go:293] postStartSetup for "multinode-988083" (driver="kvm2")
	I0531 19:21:23.353088   51569 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:21:23.353103   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.353456   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:21:23.353492   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.356439   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.356825   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.356845   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.357037   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.357222   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.357379   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.357519   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.436575   51569 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:21:23.441027   51569 command_runner.go:130] > NAME=Buildroot
	I0531 19:21:23.441042   51569 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0531 19:21:23.441046   51569 command_runner.go:130] > ID=buildroot
	I0531 19:21:23.441050   51569 command_runner.go:130] > VERSION_ID=2023.02.9
	I0531 19:21:23.441056   51569 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0531 19:21:23.441144   51569 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:21:23.441162   51569 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:21:23.441230   51569 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:21:23.441322   51569 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:21:23.441334   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /etc/ssl/certs/183722.pem
	I0531 19:21:23.441458   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:21:23.451091   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:21:23.476593   51569 start.go:296] duration metric: took 123.500193ms for postStartSetup
	I0531 19:21:23.476635   51569 fix.go:56] duration metric: took 1m31.52863973s for fixHost
	I0531 19:21:23.476654   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.479394   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.479698   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.479727   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.479858   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.480120   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.480318   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.480471   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.480661   51569 main.go:141] libmachine: Using SSH client type: native
	I0531 19:21:23.480850   51569 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.39.78 22 <nil> <nil>}
	I0531 19:21:23.480863   51569 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0531 19:21:23.578468   51569 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717183283.555209718
	
	I0531 19:21:23.578492   51569 fix.go:216] guest clock: 1717183283.555209718
	I0531 19:21:23.578501   51569 fix.go:229] Guest: 2024-05-31 19:21:23.555209718 +0000 UTC Remote: 2024-05-31 19:21:23.476638885 +0000 UTC m=+91.647733565 (delta=78.570833ms)
	I0531 19:21:23.578527   51569 fix.go:200] guest clock delta is within tolerance: 78.570833ms
	I0531 19:21:23.578534   51569 start.go:83] releasing machines lock for "multinode-988083", held for 1m31.630549811s
	I0531 19:21:23.578559   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.578799   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:21:23.581382   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.581688   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.581715   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.581790   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582264   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582407   51569 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:21:23.582490   51569 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:21:23.582537   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.582672   51569 ssh_runner.go:195] Run: cat /version.json
	I0531 19:21:23.582696   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:21:23.585342   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585372   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585746   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.585774   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585802   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:23.585822   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:23.585974   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.586051   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:21:23.586128   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.586194   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:21:23.586262   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.586403   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.586467   51569 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:21:23.586636   51569 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:21:23.658232   51569 command_runner.go:130] > {"iso_version": "v1.33.1-1716398070-18934", "kicbase_version": "v0.0.44-1716228441-18934", "minikube_version": "v1.33.1", "commit": "7bc64cce06153f72c1bf9cbcf2114663ad5af3b7"}
	I0531 19:21:23.658372   51569 ssh_runner.go:195] Run: systemctl --version
	I0531 19:21:23.683943   51569 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0531 19:21:23.684730   51569 command_runner.go:130] > systemd 252 (252)
	I0531 19:21:23.684772   51569 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0531 19:21:23.684837   51569 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:21:23.843965   51569 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0531 19:21:23.851905   51569 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0531 19:21:23.852343   51569 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:21:23.852405   51569 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:21:23.861639   51569 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:21:23.861665   51569 start.go:494] detecting cgroup driver to use...
	I0531 19:21:23.861722   51569 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:21:23.877514   51569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:21:23.892194   51569 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:21:23.892251   51569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:21:23.905850   51569 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:21:23.919428   51569 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:21:24.063081   51569 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:21:24.221907   51569 docker.go:233] disabling docker service ...
	I0531 19:21:24.221988   51569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:21:24.242256   51569 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:21:24.255597   51569 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:21:24.392350   51569 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:21:24.523889   51569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:21:24.538698   51569 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:21:24.557589   51569 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0531 19:21:24.557928   51569 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:21:24.557976   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.568875   51569 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:21:24.568935   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.579964   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.590420   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.600628   51569 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:21:24.611084   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.621115   51569 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.632138   51569 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:21:24.642163   51569 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:21:24.651399   51569 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0531 19:21:24.651471   51569 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:21:24.660416   51569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:21:24.792332   51569 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:21:25.019616   51569 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:21:25.019674   51569 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:21:25.024711   51569 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0531 19:21:25.024734   51569 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0531 19:21:25.024741   51569 command_runner.go:130] > Device: 0,22	Inode: 1347        Links: 1
	I0531 19:21:25.024747   51569 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:21:25.024754   51569 command_runner.go:130] > Access: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024759   51569 command_runner.go:130] > Modify: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024763   51569 command_runner.go:130] > Change: 2024-05-31 19:21:24.893811928 +0000
	I0531 19:21:25.024767   51569 command_runner.go:130] >  Birth: -
	I0531 19:21:25.024782   51569 start.go:562] Will wait 60s for crictl version
	I0531 19:21:25.024812   51569 ssh_runner.go:195] Run: which crictl
	I0531 19:21:25.028520   51569 command_runner.go:130] > /usr/bin/crictl
	I0531 19:21:25.028638   51569 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:21:25.063450   51569 command_runner.go:130] > Version:  0.1.0
	I0531 19:21:25.063472   51569 command_runner.go:130] > RuntimeName:  cri-o
	I0531 19:21:25.063480   51569 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0531 19:21:25.063488   51569 command_runner.go:130] > RuntimeApiVersion:  v1
	I0531 19:21:25.064566   51569 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:21:25.064641   51569 ssh_runner.go:195] Run: crio --version
	I0531 19:21:25.091174   51569 command_runner.go:130] > crio version 1.29.1
	I0531 19:21:25.091192   51569 command_runner.go:130] > Version:        1.29.1
	I0531 19:21:25.091198   51569 command_runner.go:130] > GitCommit:      unknown
	I0531 19:21:25.091203   51569 command_runner.go:130] > GitCommitDate:  unknown
	I0531 19:21:25.091207   51569 command_runner.go:130] > GitTreeState:   clean
	I0531 19:21:25.091227   51569 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0531 19:21:25.091233   51569 command_runner.go:130] > GoVersion:      go1.21.6
	I0531 19:21:25.091239   51569 command_runner.go:130] > Compiler:       gc
	I0531 19:21:25.091247   51569 command_runner.go:130] > Platform:       linux/amd64
	I0531 19:21:25.091254   51569 command_runner.go:130] > Linkmode:       dynamic
	I0531 19:21:25.091264   51569 command_runner.go:130] > BuildTags:      
	I0531 19:21:25.091271   51569 command_runner.go:130] >   containers_image_ostree_stub
	I0531 19:21:25.091276   51569 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0531 19:21:25.091283   51569 command_runner.go:130] >   btrfs_noversion
	I0531 19:21:25.091287   51569 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0531 19:21:25.091291   51569 command_runner.go:130] >   libdm_no_deferred_remove
	I0531 19:21:25.091297   51569 command_runner.go:130] >   seccomp
	I0531 19:21:25.091301   51569 command_runner.go:130] > LDFlags:          unknown
	I0531 19:21:25.091308   51569 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:21:25.091312   51569 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:21:25.092428   51569 ssh_runner.go:195] Run: crio --version
	I0531 19:21:25.118285   51569 command_runner.go:130] > crio version 1.29.1
	I0531 19:21:25.118307   51569 command_runner.go:130] > Version:        1.29.1
	I0531 19:21:25.118314   51569 command_runner.go:130] > GitCommit:      unknown
	I0531 19:21:25.118321   51569 command_runner.go:130] > GitCommitDate:  unknown
	I0531 19:21:25.118327   51569 command_runner.go:130] > GitTreeState:   clean
	I0531 19:21:25.118336   51569 command_runner.go:130] > BuildDate:      2024-05-22T23:02:45Z
	I0531 19:21:25.118342   51569 command_runner.go:130] > GoVersion:      go1.21.6
	I0531 19:21:25.118348   51569 command_runner.go:130] > Compiler:       gc
	I0531 19:21:25.118355   51569 command_runner.go:130] > Platform:       linux/amd64
	I0531 19:21:25.118371   51569 command_runner.go:130] > Linkmode:       dynamic
	I0531 19:21:25.118381   51569 command_runner.go:130] > BuildTags:      
	I0531 19:21:25.118387   51569 command_runner.go:130] >   containers_image_ostree_stub
	I0531 19:21:25.118393   51569 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0531 19:21:25.118403   51569 command_runner.go:130] >   btrfs_noversion
	I0531 19:21:25.118411   51569 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0531 19:21:25.118421   51569 command_runner.go:130] >   libdm_no_deferred_remove
	I0531 19:21:25.118426   51569 command_runner.go:130] >   seccomp
	I0531 19:21:25.118436   51569 command_runner.go:130] > LDFlags:          unknown
	I0531 19:21:25.118442   51569 command_runner.go:130] > SeccompEnabled:   true
	I0531 19:21:25.118451   51569 command_runner.go:130] > AppArmorEnabled:  false
	I0531 19:21:25.122163   51569 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 19:21:25.123675   51569 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:21:25.126244   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:25.126536   51569 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:21:25.126575   51569 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:21:25.126743   51569 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0531 19:21:25.130908   51569 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0531 19:21:25.131082   51569 kubeadm.go:877] updating cluster {Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:21:25.131245   51569 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:21:25.131313   51569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:21:25.177009   51569 command_runner.go:130] > {
	I0531 19:21:25.177032   51569 command_runner.go:130] >   "images": [
	I0531 19:21:25.177038   51569 command_runner.go:130] >     {
	I0531 19:21:25.177048   51569 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0531 19:21:25.177054   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177062   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0531 19:21:25.177067   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177072   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177086   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0531 19:21:25.177099   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0531 19:21:25.177106   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177114   51569 command_runner.go:130] >       "size": "65291810",
	I0531 19:21:25.177121   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177128   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177137   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177148   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177154   51569 command_runner.go:130] >     },
	I0531 19:21:25.177160   51569 command_runner.go:130] >     {
	I0531 19:21:25.177171   51569 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0531 19:21:25.177180   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177189   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0531 19:21:25.177195   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177212   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177228   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0531 19:21:25.177244   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0531 19:21:25.177254   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177262   51569 command_runner.go:130] >       "size": "65908273",
	I0531 19:21:25.177270   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177282   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177293   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177302   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177308   51569 command_runner.go:130] >     },
	I0531 19:21:25.177315   51569 command_runner.go:130] >     {
	I0531 19:21:25.177327   51569 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0531 19:21:25.177336   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177346   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0531 19:21:25.177356   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177365   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177379   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0531 19:21:25.177394   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0531 19:21:25.177403   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177410   51569 command_runner.go:130] >       "size": "1363676",
	I0531 19:21:25.177419   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177427   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177436   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177445   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177454   51569 command_runner.go:130] >     },
	I0531 19:21:25.177460   51569 command_runner.go:130] >     {
	I0531 19:21:25.177474   51569 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:21:25.177483   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177493   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:21:25.177502   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177511   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177525   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:21:25.177549   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:21:25.177558   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177565   51569 command_runner.go:130] >       "size": "31470524",
	I0531 19:21:25.177596   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177618   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177628   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177637   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177643   51569 command_runner.go:130] >     },
	I0531 19:21:25.177652   51569 command_runner.go:130] >     {
	I0531 19:21:25.177663   51569 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0531 19:21:25.177673   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177682   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0531 19:21:25.177691   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177699   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177715   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0531 19:21:25.177730   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0531 19:21:25.177740   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177749   51569 command_runner.go:130] >       "size": "61245718",
	I0531 19:21:25.177759   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.177766   51569 command_runner.go:130] >       "username": "nonroot",
	I0531 19:21:25.177776   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177784   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177792   51569 command_runner.go:130] >     },
	I0531 19:21:25.177799   51569 command_runner.go:130] >     {
	I0531 19:21:25.177813   51569 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0531 19:21:25.177823   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.177834   51569 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0531 19:21:25.177843   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177850   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.177865   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0531 19:21:25.177879   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0531 19:21:25.177892   51569 command_runner.go:130] >       ],
	I0531 19:21:25.177905   51569 command_runner.go:130] >       "size": "150779692",
	I0531 19:21:25.177918   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.177925   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.177938   51569 command_runner.go:130] >       },
	I0531 19:21:25.177943   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.177949   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.177955   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.177961   51569 command_runner.go:130] >     },
	I0531 19:21:25.177973   51569 command_runner.go:130] >     {
	I0531 19:21:25.177983   51569 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0531 19:21:25.177991   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178000   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0531 19:21:25.178008   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178012   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178020   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0531 19:21:25.178030   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0531 19:21:25.178035   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178040   51569 command_runner.go:130] >       "size": "117601759",
	I0531 19:21:25.178046   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178050   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178055   51569 command_runner.go:130] >       },
	I0531 19:21:25.178060   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178070   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178076   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178084   51569 command_runner.go:130] >     },
	I0531 19:21:25.178093   51569 command_runner.go:130] >     {
	I0531 19:21:25.178106   51569 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0531 19:21:25.178114   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178125   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0531 19:21:25.178131   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178142   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178173   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0531 19:21:25.178190   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0531 19:21:25.178197   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178204   51569 command_runner.go:130] >       "size": "112170310",
	I0531 19:21:25.178214   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178223   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178232   51569 command_runner.go:130] >       },
	I0531 19:21:25.178238   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178248   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178255   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178260   51569 command_runner.go:130] >     },
	I0531 19:21:25.178266   51569 command_runner.go:130] >     {
	I0531 19:21:25.178275   51569 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0531 19:21:25.178287   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178295   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0531 19:21:25.178305   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178312   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178327   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0531 19:21:25.178340   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0531 19:21:25.178346   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178353   51569 command_runner.go:130] >       "size": "85933465",
	I0531 19:21:25.178363   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.178373   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178379   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178389   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178398   51569 command_runner.go:130] >     },
	I0531 19:21:25.178406   51569 command_runner.go:130] >     {
	I0531 19:21:25.178419   51569 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0531 19:21:25.178427   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178438   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0531 19:21:25.178447   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178465   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178480   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0531 19:21:25.178496   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0531 19:21:25.178504   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178514   51569 command_runner.go:130] >       "size": "63026504",
	I0531 19:21:25.178523   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178532   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.178541   51569 command_runner.go:130] >       },
	I0531 19:21:25.178548   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178559   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178569   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.178578   51569 command_runner.go:130] >     },
	I0531 19:21:25.178588   51569 command_runner.go:130] >     {
	I0531 19:21:25.178600   51569 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:21:25.178610   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.178620   51569 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:21:25.178628   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178635   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.178651   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:21:25.178666   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:21:25.178675   51569 command_runner.go:130] >       ],
	I0531 19:21:25.178690   51569 command_runner.go:130] >       "size": "750414",
	I0531 19:21:25.178699   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.178709   51569 command_runner.go:130] >         "value": "65535"
	I0531 19:21:25.178718   51569 command_runner.go:130] >       },
	I0531 19:21:25.178726   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.178733   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.178737   51569 command_runner.go:130] >       "pinned": true
	I0531 19:21:25.178745   51569 command_runner.go:130] >     }
	I0531 19:21:25.178755   51569 command_runner.go:130] >   ]
	I0531 19:21:25.178760   51569 command_runner.go:130] > }
	I0531 19:21:25.179040   51569 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:21:25.179052   51569 crio.go:433] Images already preloaded, skipping extraction
	I0531 19:21:25.179102   51569 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:21:25.212965   51569 command_runner.go:130] > {
	I0531 19:21:25.212995   51569 command_runner.go:130] >   "images": [
	I0531 19:21:25.213001   51569 command_runner.go:130] >     {
	I0531 19:21:25.213013   51569 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0531 19:21:25.213019   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213025   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0531 19:21:25.213029   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213033   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213040   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0531 19:21:25.213048   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0531 19:21:25.213053   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213060   51569 command_runner.go:130] >       "size": "65291810",
	I0531 19:21:25.213066   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213075   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213089   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213100   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213105   51569 command_runner.go:130] >     },
	I0531 19:21:25.213111   51569 command_runner.go:130] >     {
	I0531 19:21:25.213120   51569 command_runner.go:130] >       "id": "ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f",
	I0531 19:21:25.213129   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213137   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240513-cd2ac642"
	I0531 19:21:25.213141   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213149   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213164   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266",
	I0531 19:21:25.213177   51569 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"
	I0531 19:21:25.213186   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213203   51569 command_runner.go:130] >       "size": "65908273",
	I0531 19:21:25.213213   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213225   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213233   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213237   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213245   51569 command_runner.go:130] >     },
	I0531 19:21:25.213253   51569 command_runner.go:130] >     {
	I0531 19:21:25.213267   51569 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0531 19:21:25.213277   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213288   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0531 19:21:25.213298   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213307   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213322   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0531 19:21:25.213335   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0531 19:21:25.213342   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213348   51569 command_runner.go:130] >       "size": "1363676",
	I0531 19:21:25.213358   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213368   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213375   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213385   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213394   51569 command_runner.go:130] >     },
	I0531 19:21:25.213402   51569 command_runner.go:130] >     {
	I0531 19:21:25.213414   51569 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0531 19:21:25.213424   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213436   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0531 19:21:25.213442   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213446   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213462   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0531 19:21:25.213487   51569 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0531 19:21:25.213498   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213504   51569 command_runner.go:130] >       "size": "31470524",
	I0531 19:21:25.213511   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213520   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213529   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213537   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213542   51569 command_runner.go:130] >     },
	I0531 19:21:25.213554   51569 command_runner.go:130] >     {
	I0531 19:21:25.213568   51569 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0531 19:21:25.213577   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213586   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0531 19:21:25.213595   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213604   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213619   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0531 19:21:25.213634   51569 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0531 19:21:25.213643   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213650   51569 command_runner.go:130] >       "size": "61245718",
	I0531 19:21:25.213654   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.213663   51569 command_runner.go:130] >       "username": "nonroot",
	I0531 19:21:25.213673   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213683   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213692   51569 command_runner.go:130] >     },
	I0531 19:21:25.213701   51569 command_runner.go:130] >     {
	I0531 19:21:25.213714   51569 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0531 19:21:25.213723   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213734   51569 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0531 19:21:25.213749   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213759   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213774   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0531 19:21:25.213788   51569 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0531 19:21:25.213797   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213808   51569 command_runner.go:130] >       "size": "150779692",
	I0531 19:21:25.213817   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.213826   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.213834   51569 command_runner.go:130] >       },
	I0531 19:21:25.213843   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.213852   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.213857   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.213869   51569 command_runner.go:130] >     },
	I0531 19:21:25.213873   51569 command_runner.go:130] >     {
	I0531 19:21:25.213883   51569 command_runner.go:130] >       "id": "91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a",
	I0531 19:21:25.213889   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.213897   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.1"
	I0531 19:21:25.213909   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213916   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.213934   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea",
	I0531 19:21:25.213949   51569 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"
	I0531 19:21:25.213958   51569 command_runner.go:130] >       ],
	I0531 19:21:25.213965   51569 command_runner.go:130] >       "size": "117601759",
	I0531 19:21:25.213974   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.213981   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.213989   51569 command_runner.go:130] >       },
	I0531 19:21:25.213996   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214006   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214012   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214018   51569 command_runner.go:130] >     },
	I0531 19:21:25.214024   51569 command_runner.go:130] >     {
	I0531 19:21:25.214037   51569 command_runner.go:130] >       "id": "25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c",
	I0531 19:21:25.214046   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214055   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.1"
	I0531 19:21:25.214062   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214066   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214088   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52",
	I0531 19:21:25.214098   51569 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"
	I0531 19:21:25.214102   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214106   51569 command_runner.go:130] >       "size": "112170310",
	I0531 19:21:25.214112   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214116   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.214121   51569 command_runner.go:130] >       },
	I0531 19:21:25.214128   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214134   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214138   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214146   51569 command_runner.go:130] >     },
	I0531 19:21:25.214149   51569 command_runner.go:130] >     {
	I0531 19:21:25.214155   51569 command_runner.go:130] >       "id": "747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd",
	I0531 19:21:25.214159   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214164   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.1"
	I0531 19:21:25.214168   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214172   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214186   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa",
	I0531 19:21:25.214196   51569 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"
	I0531 19:21:25.214200   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214204   51569 command_runner.go:130] >       "size": "85933465",
	I0531 19:21:25.214208   51569 command_runner.go:130] >       "uid": null,
	I0531 19:21:25.214212   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214218   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214222   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214228   51569 command_runner.go:130] >     },
	I0531 19:21:25.214231   51569 command_runner.go:130] >     {
	I0531 19:21:25.214237   51569 command_runner.go:130] >       "id": "a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035",
	I0531 19:21:25.214244   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214249   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.1"
	I0531 19:21:25.214254   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214258   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214265   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036",
	I0531 19:21:25.214273   51569 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"
	I0531 19:21:25.214277   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214282   51569 command_runner.go:130] >       "size": "63026504",
	I0531 19:21:25.214288   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214291   51569 command_runner.go:130] >         "value": "0"
	I0531 19:21:25.214294   51569 command_runner.go:130] >       },
	I0531 19:21:25.214298   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214302   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214307   51569 command_runner.go:130] >       "pinned": false
	I0531 19:21:25.214311   51569 command_runner.go:130] >     },
	I0531 19:21:25.214316   51569 command_runner.go:130] >     {
	I0531 19:21:25.214322   51569 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0531 19:21:25.214327   51569 command_runner.go:130] >       "repoTags": [
	I0531 19:21:25.214332   51569 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0531 19:21:25.214335   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214339   51569 command_runner.go:130] >       "repoDigests": [
	I0531 19:21:25.214346   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0531 19:21:25.214355   51569 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0531 19:21:25.214358   51569 command_runner.go:130] >       ],
	I0531 19:21:25.214364   51569 command_runner.go:130] >       "size": "750414",
	I0531 19:21:25.214376   51569 command_runner.go:130] >       "uid": {
	I0531 19:21:25.214383   51569 command_runner.go:130] >         "value": "65535"
	I0531 19:21:25.214386   51569 command_runner.go:130] >       },
	I0531 19:21:25.214390   51569 command_runner.go:130] >       "username": "",
	I0531 19:21:25.214394   51569 command_runner.go:130] >       "spec": null,
	I0531 19:21:25.214397   51569 command_runner.go:130] >       "pinned": true
	I0531 19:21:25.214401   51569 command_runner.go:130] >     }
	I0531 19:21:25.214404   51569 command_runner.go:130] >   ]
	I0531 19:21:25.214407   51569 command_runner.go:130] > }
	I0531 19:21:25.214536   51569 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:21:25.214546   51569 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:21:25.214552   51569 kubeadm.go:928] updating node { 192.168.39.78 8443 v1.30.1 crio true true} ...
	I0531 19:21:25.214659   51569 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-988083 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.78
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 19:21:25.214731   51569 ssh_runner.go:195] Run: crio config
	I0531 19:21:25.256986   51569 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0531 19:21:25.257018   51569 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0531 19:21:25.257028   51569 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0531 19:21:25.257033   51569 command_runner.go:130] > #
	I0531 19:21:25.257042   51569 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0531 19:21:25.257051   51569 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0531 19:21:25.257061   51569 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0531 19:21:25.257071   51569 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0531 19:21:25.257077   51569 command_runner.go:130] > # reload'.
	I0531 19:21:25.257095   51569 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0531 19:21:25.257111   51569 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0531 19:21:25.257121   51569 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0531 19:21:25.257127   51569 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0531 19:21:25.257133   51569 command_runner.go:130] > [crio]
	I0531 19:21:25.257147   51569 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0531 19:21:25.257152   51569 command_runner.go:130] > # containers images, in this directory.
	I0531 19:21:25.257162   51569 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0531 19:21:25.257184   51569 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0531 19:21:25.257337   51569 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0531 19:21:25.257351   51569 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0531 19:21:25.257535   51569 command_runner.go:130] > # imagestore = ""
	I0531 19:21:25.257545   51569 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0531 19:21:25.257551   51569 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0531 19:21:25.257778   51569 command_runner.go:130] > storage_driver = "overlay"
	I0531 19:21:25.257797   51569 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0531 19:21:25.257807   51569 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0531 19:21:25.257818   51569 command_runner.go:130] > storage_option = [
	I0531 19:21:25.257917   51569 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0531 19:21:25.257995   51569 command_runner.go:130] > ]
	I0531 19:21:25.258005   51569 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0531 19:21:25.258011   51569 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0531 19:21:25.258571   51569 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0531 19:21:25.258587   51569 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0531 19:21:25.258597   51569 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0531 19:21:25.258609   51569 command_runner.go:130] > # always happen on a node reboot
	I0531 19:21:25.258808   51569 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0531 19:21:25.258842   51569 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0531 19:21:25.258857   51569 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0531 19:21:25.258869   51569 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0531 19:21:25.259062   51569 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0531 19:21:25.259103   51569 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0531 19:21:25.259139   51569 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0531 19:21:25.259199   51569 command_runner.go:130] > # internal_wipe = true
	I0531 19:21:25.259216   51569 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0531 19:21:25.259226   51569 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0531 19:21:25.259658   51569 command_runner.go:130] > # internal_repair = false
	I0531 19:21:25.259670   51569 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0531 19:21:25.259679   51569 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0531 19:21:25.259688   51569 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0531 19:21:25.260084   51569 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0531 19:21:25.260098   51569 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0531 19:21:25.260104   51569 command_runner.go:130] > [crio.api]
	I0531 19:21:25.260114   51569 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0531 19:21:25.260491   51569 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0531 19:21:25.260503   51569 command_runner.go:130] > # IP address on which the stream server will listen.
	I0531 19:21:25.260791   51569 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0531 19:21:25.260806   51569 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0531 19:21:25.260812   51569 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0531 19:21:25.261121   51569 command_runner.go:130] > # stream_port = "0"
	I0531 19:21:25.261136   51569 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0531 19:21:25.261355   51569 command_runner.go:130] > # stream_enable_tls = false
	I0531 19:21:25.261368   51569 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0531 19:21:25.261607   51569 command_runner.go:130] > # stream_idle_timeout = ""
	I0531 19:21:25.261638   51569 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0531 19:21:25.261650   51569 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0531 19:21:25.261659   51569 command_runner.go:130] > # minutes.
	I0531 19:21:25.261827   51569 command_runner.go:130] > # stream_tls_cert = ""
	I0531 19:21:25.261843   51569 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0531 19:21:25.261852   51569 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0531 19:21:25.262096   51569 command_runner.go:130] > # stream_tls_key = ""
	I0531 19:21:25.262108   51569 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0531 19:21:25.262114   51569 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0531 19:21:25.262135   51569 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0531 19:21:25.262415   51569 command_runner.go:130] > # stream_tls_ca = ""
	I0531 19:21:25.262432   51569 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0531 19:21:25.262568   51569 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0531 19:21:25.262581   51569 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0531 19:21:25.262707   51569 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0531 19:21:25.262720   51569 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0531 19:21:25.262730   51569 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0531 19:21:25.262737   51569 command_runner.go:130] > [crio.runtime]
	I0531 19:21:25.262748   51569 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0531 19:21:25.262762   51569 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0531 19:21:25.262772   51569 command_runner.go:130] > # "nofile=1024:2048"
	I0531 19:21:25.262785   51569 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0531 19:21:25.262852   51569 command_runner.go:130] > # default_ulimits = [
	I0531 19:21:25.263007   51569 command_runner.go:130] > # ]
	I0531 19:21:25.263021   51569 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0531 19:21:25.263455   51569 command_runner.go:130] > # no_pivot = false
	I0531 19:21:25.263467   51569 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0531 19:21:25.263477   51569 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0531 19:21:25.263760   51569 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0531 19:21:25.263772   51569 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0531 19:21:25.263780   51569 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0531 19:21:25.263792   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:21:25.263946   51569 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0531 19:21:25.263956   51569 command_runner.go:130] > # Cgroup setting for conmon
	I0531 19:21:25.263967   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0531 19:21:25.264124   51569 command_runner.go:130] > conmon_cgroup = "pod"
	I0531 19:21:25.264138   51569 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0531 19:21:25.264143   51569 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0531 19:21:25.264151   51569 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0531 19:21:25.264157   51569 command_runner.go:130] > conmon_env = [
	I0531 19:21:25.264329   51569 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0531 19:21:25.264386   51569 command_runner.go:130] > ]
	I0531 19:21:25.264399   51569 command_runner.go:130] > # Additional environment variables to set for all the
	I0531 19:21:25.264411   51569 command_runner.go:130] > # containers. These are overridden if set in the
	I0531 19:21:25.264423   51569 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0531 19:21:25.264530   51569 command_runner.go:130] > # default_env = [
	I0531 19:21:25.264706   51569 command_runner.go:130] > # ]
	I0531 19:21:25.264725   51569 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0531 19:21:25.264738   51569 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0531 19:21:25.266350   51569 command_runner.go:130] > # selinux = false
	I0531 19:21:25.266365   51569 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0531 19:21:25.266374   51569 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0531 19:21:25.266383   51569 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0531 19:21:25.266390   51569 command_runner.go:130] > # seccomp_profile = ""
	I0531 19:21:25.266403   51569 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0531 19:21:25.266413   51569 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0531 19:21:25.266427   51569 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0531 19:21:25.266434   51569 command_runner.go:130] > # which might increase security.
	I0531 19:21:25.266442   51569 command_runner.go:130] > # This option is currently deprecated,
	I0531 19:21:25.266456   51569 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0531 19:21:25.266467   51569 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0531 19:21:25.266478   51569 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0531 19:21:25.266492   51569 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0531 19:21:25.266504   51569 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0531 19:21:25.266526   51569 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0531 19:21:25.266538   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.266546   51569 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0531 19:21:25.266557   51569 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0531 19:21:25.266568   51569 command_runner.go:130] > # the cgroup blockio controller.
	I0531 19:21:25.266579   51569 command_runner.go:130] > # blockio_config_file = ""
	I0531 19:21:25.266591   51569 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0531 19:21:25.266601   51569 command_runner.go:130] > # blockio parameters.
	I0531 19:21:25.266612   51569 command_runner.go:130] > # blockio_reload = false
	I0531 19:21:25.266625   51569 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0531 19:21:25.266635   51569 command_runner.go:130] > # irqbalance daemon.
	I0531 19:21:25.266646   51569 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0531 19:21:25.266659   51569 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0531 19:21:25.266674   51569 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0531 19:21:25.266688   51569 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0531 19:21:25.266702   51569 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0531 19:21:25.266716   51569 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0531 19:21:25.266726   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.266734   51569 command_runner.go:130] > # rdt_config_file = ""
	I0531 19:21:25.266746   51569 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0531 19:21:25.266756   51569 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0531 19:21:25.266793   51569 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0531 19:21:25.266803   51569 command_runner.go:130] > # separate_pull_cgroup = ""
	I0531 19:21:25.266814   51569 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0531 19:21:25.266829   51569 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0531 19:21:25.266839   51569 command_runner.go:130] > # will be added.
	I0531 19:21:25.266849   51569 command_runner.go:130] > # default_capabilities = [
	I0531 19:21:25.266858   51569 command_runner.go:130] > # 	"CHOWN",
	I0531 19:21:25.266867   51569 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0531 19:21:25.266873   51569 command_runner.go:130] > # 	"FSETID",
	I0531 19:21:25.266884   51569 command_runner.go:130] > # 	"FOWNER",
	I0531 19:21:25.266893   51569 command_runner.go:130] > # 	"SETGID",
	I0531 19:21:25.266900   51569 command_runner.go:130] > # 	"SETUID",
	I0531 19:21:25.266909   51569 command_runner.go:130] > # 	"SETPCAP",
	I0531 19:21:25.266916   51569 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0531 19:21:25.266925   51569 command_runner.go:130] > # 	"KILL",
	I0531 19:21:25.266938   51569 command_runner.go:130] > # ]
	I0531 19:21:25.266954   51569 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0531 19:21:25.266969   51569 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0531 19:21:25.266980   51569 command_runner.go:130] > # add_inheritable_capabilities = false
	I0531 19:21:25.266994   51569 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0531 19:21:25.267007   51569 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:21:25.267014   51569 command_runner.go:130] > default_sysctls = [
	I0531 19:21:25.267025   51569 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0531 19:21:25.267030   51569 command_runner.go:130] > ]
	I0531 19:21:25.267039   51569 command_runner.go:130] > # List of devices on the host that a
	I0531 19:21:25.267052   51569 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0531 19:21:25.267062   51569 command_runner.go:130] > # allowed_devices = [
	I0531 19:21:25.267070   51569 command_runner.go:130] > # 	"/dev/fuse",
	I0531 19:21:25.267077   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267088   51569 command_runner.go:130] > # List of additional devices. specified as
	I0531 19:21:25.267100   51569 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0531 19:21:25.267112   51569 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0531 19:21:25.267126   51569 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0531 19:21:25.267136   51569 command_runner.go:130] > # additional_devices = [
	I0531 19:21:25.267146   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267155   51569 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0531 19:21:25.267165   51569 command_runner.go:130] > # cdi_spec_dirs = [
	I0531 19:21:25.267172   51569 command_runner.go:130] > # 	"/etc/cdi",
	I0531 19:21:25.267179   51569 command_runner.go:130] > # 	"/var/run/cdi",
	I0531 19:21:25.267194   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267208   51569 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0531 19:21:25.267221   51569 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0531 19:21:25.267232   51569 command_runner.go:130] > # Defaults to false.
	I0531 19:21:25.267251   51569 command_runner.go:130] > # device_ownership_from_security_context = false
	I0531 19:21:25.267265   51569 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0531 19:21:25.267275   51569 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0531 19:21:25.267285   51569 command_runner.go:130] > # hooks_dir = [
	I0531 19:21:25.267296   51569 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0531 19:21:25.267303   51569 command_runner.go:130] > # ]
	I0531 19:21:25.267321   51569 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0531 19:21:25.267335   51569 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0531 19:21:25.267356   51569 command_runner.go:130] > # its default mounts from the following two files:
	I0531 19:21:25.267372   51569 command_runner.go:130] > #
	I0531 19:21:25.267386   51569 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0531 19:21:25.267400   51569 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0531 19:21:25.267413   51569 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0531 19:21:25.267421   51569 command_runner.go:130] > #
	I0531 19:21:25.267432   51569 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0531 19:21:25.267446   51569 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0531 19:21:25.267460   51569 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0531 19:21:25.267471   51569 command_runner.go:130] > #      only add mounts it finds in this file.
	I0531 19:21:25.267480   51569 command_runner.go:130] > #
	I0531 19:21:25.267487   51569 command_runner.go:130] > # default_mounts_file = ""
	I0531 19:21:25.267499   51569 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0531 19:21:25.267513   51569 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0531 19:21:25.267523   51569 command_runner.go:130] > pids_limit = 1024
	I0531 19:21:25.267533   51569 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0531 19:21:25.267542   51569 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0531 19:21:25.267552   51569 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0531 19:21:25.267564   51569 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0531 19:21:25.267573   51569 command_runner.go:130] > # log_size_max = -1
	I0531 19:21:25.267584   51569 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0531 19:21:25.267595   51569 command_runner.go:130] > # log_to_journald = false
	I0531 19:21:25.267610   51569 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0531 19:21:25.267619   51569 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0531 19:21:25.267631   51569 command_runner.go:130] > # Path to directory for container attach sockets.
	I0531 19:21:25.267642   51569 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0531 19:21:25.267651   51569 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0531 19:21:25.267662   51569 command_runner.go:130] > # bind_mount_prefix = ""
	I0531 19:21:25.267675   51569 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0531 19:21:25.267685   51569 command_runner.go:130] > # read_only = false
	I0531 19:21:25.267698   51569 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0531 19:21:25.267712   51569 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0531 19:21:25.267722   51569 command_runner.go:130] > # live configuration reload.
	I0531 19:21:25.267732   51569 command_runner.go:130] > # log_level = "info"
	I0531 19:21:25.267742   51569 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0531 19:21:25.267753   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.267768   51569 command_runner.go:130] > # log_filter = ""
	I0531 19:21:25.267782   51569 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0531 19:21:25.267796   51569 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0531 19:21:25.267805   51569 command_runner.go:130] > # separated by comma.
	I0531 19:21:25.267818   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267827   51569 command_runner.go:130] > # uid_mappings = ""
	I0531 19:21:25.267840   51569 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0531 19:21:25.267854   51569 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0531 19:21:25.267864   51569 command_runner.go:130] > # separated by comma.
	I0531 19:21:25.267878   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267887   51569 command_runner.go:130] > # gid_mappings = ""
	I0531 19:21:25.267898   51569 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0531 19:21:25.267912   51569 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:21:25.267925   51569 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:21:25.267941   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.267951   51569 command_runner.go:130] > # minimum_mappable_uid = -1
	I0531 19:21:25.267964   51569 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0531 19:21:25.267977   51569 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0531 19:21:25.267987   51569 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0531 19:21:25.268003   51569 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0531 19:21:25.268014   51569 command_runner.go:130] > # minimum_mappable_gid = -1
	I0531 19:21:25.268025   51569 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0531 19:21:25.268038   51569 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0531 19:21:25.268050   51569 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0531 19:21:25.268060   51569 command_runner.go:130] > # ctr_stop_timeout = 30
	I0531 19:21:25.268069   51569 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0531 19:21:25.268082   51569 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0531 19:21:25.268094   51569 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0531 19:21:25.268106   51569 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0531 19:21:25.268116   51569 command_runner.go:130] > drop_infra_ctr = false
	I0531 19:21:25.268129   51569 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0531 19:21:25.268141   51569 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0531 19:21:25.268157   51569 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0531 19:21:25.268168   51569 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0531 19:21:25.268183   51569 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0531 19:21:25.268201   51569 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0531 19:21:25.268220   51569 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0531 19:21:25.268232   51569 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0531 19:21:25.268241   51569 command_runner.go:130] > # shared_cpuset = ""
	I0531 19:21:25.268252   51569 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0531 19:21:25.268264   51569 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0531 19:21:25.268274   51569 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0531 19:21:25.268287   51569 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0531 19:21:25.268297   51569 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0531 19:21:25.268308   51569 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0531 19:21:25.268322   51569 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0531 19:21:25.268331   51569 command_runner.go:130] > # enable_criu_support = false
	I0531 19:21:25.268340   51569 command_runner.go:130] > # Enable/disable the generation of the container,
	I0531 19:21:25.268354   51569 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0531 19:21:25.268364   51569 command_runner.go:130] > # enable_pod_events = false
	I0531 19:21:25.268376   51569 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:21:25.268389   51569 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0531 19:21:25.268402   51569 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0531 19:21:25.268412   51569 command_runner.go:130] > # default_runtime = "runc"
	I0531 19:21:25.268424   51569 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0531 19:21:25.268436   51569 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0531 19:21:25.268454   51569 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0531 19:21:25.268466   51569 command_runner.go:130] > # creation as a file is not desired either.
	I0531 19:21:25.268483   51569 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0531 19:21:25.268495   51569 command_runner.go:130] > # the hostname is being managed dynamically.
	I0531 19:21:25.268505   51569 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0531 19:21:25.268510   51569 command_runner.go:130] > # ]
	I0531 19:21:25.268524   51569 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0531 19:21:25.268539   51569 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0531 19:21:25.268552   51569 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0531 19:21:25.268564   51569 command_runner.go:130] > # Each entry in the table should follow the format:
	I0531 19:21:25.268572   51569 command_runner.go:130] > #
	I0531 19:21:25.268581   51569 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0531 19:21:25.268592   51569 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0531 19:21:25.268644   51569 command_runner.go:130] > # runtime_type = "oci"
	I0531 19:21:25.268655   51569 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0531 19:21:25.268665   51569 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0531 19:21:25.268681   51569 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0531 19:21:25.268690   51569 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0531 19:21:25.268699   51569 command_runner.go:130] > # monitor_env = []
	I0531 19:21:25.268709   51569 command_runner.go:130] > # privileged_without_host_devices = false
	I0531 19:21:25.268720   51569 command_runner.go:130] > # allowed_annotations = []
	I0531 19:21:25.268733   51569 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0531 19:21:25.268742   51569 command_runner.go:130] > # Where:
	I0531 19:21:25.268751   51569 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0531 19:21:25.268764   51569 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0531 19:21:25.268776   51569 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0531 19:21:25.268789   51569 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0531 19:21:25.268799   51569 command_runner.go:130] > #   in $PATH.
	I0531 19:21:25.268812   51569 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0531 19:21:25.268823   51569 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0531 19:21:25.268834   51569 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0531 19:21:25.268843   51569 command_runner.go:130] > #   state.
	I0531 19:21:25.268854   51569 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0531 19:21:25.268867   51569 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0531 19:21:25.268881   51569 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0531 19:21:25.268894   51569 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0531 19:21:25.268908   51569 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0531 19:21:25.268921   51569 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0531 19:21:25.268931   51569 command_runner.go:130] > #   The currently recognized values are:
	I0531 19:21:25.268942   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0531 19:21:25.268967   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0531 19:21:25.268980   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0531 19:21:25.268994   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0531 19:21:25.269010   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0531 19:21:25.269023   51569 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0531 19:21:25.269038   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0531 19:21:25.269051   51569 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0531 19:21:25.269061   51569 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0531 19:21:25.269075   51569 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0531 19:21:25.269086   51569 command_runner.go:130] > #   deprecated option "conmon".
	I0531 19:21:25.269098   51569 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0531 19:21:25.269109   51569 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0531 19:21:25.269133   51569 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0531 19:21:25.269145   51569 command_runner.go:130] > #   should be moved to the container's cgroup
	I0531 19:21:25.269157   51569 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0531 19:21:25.269169   51569 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0531 19:21:25.269181   51569 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0531 19:21:25.269197   51569 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0531 19:21:25.269205   51569 command_runner.go:130] > #
	I0531 19:21:25.269214   51569 command_runner.go:130] > # Using the seccomp notifier feature:
	I0531 19:21:25.269221   51569 command_runner.go:130] > #
	I0531 19:21:25.269232   51569 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0531 19:21:25.269246   51569 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0531 19:21:25.269254   51569 command_runner.go:130] > #
	I0531 19:21:25.269267   51569 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0531 19:21:25.269281   51569 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0531 19:21:25.269289   51569 command_runner.go:130] > #
	I0531 19:21:25.269299   51569 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0531 19:21:25.269308   51569 command_runner.go:130] > # feature.
	I0531 19:21:25.269315   51569 command_runner.go:130] > #
	I0531 19:21:25.269328   51569 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0531 19:21:25.269342   51569 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0531 19:21:25.269355   51569 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0531 19:21:25.269368   51569 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0531 19:21:25.269382   51569 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0531 19:21:25.269390   51569 command_runner.go:130] > #
	I0531 19:21:25.269402   51569 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0531 19:21:25.269414   51569 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0531 19:21:25.269421   51569 command_runner.go:130] > #
	I0531 19:21:25.269432   51569 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0531 19:21:25.269444   51569 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0531 19:21:25.269452   51569 command_runner.go:130] > #
	I0531 19:21:25.269462   51569 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0531 19:21:25.269476   51569 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0531 19:21:25.269486   51569 command_runner.go:130] > # limitation.
	I0531 19:21:25.269498   51569 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0531 19:21:25.269508   51569 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0531 19:21:25.269517   51569 command_runner.go:130] > runtime_type = "oci"
	I0531 19:21:25.269534   51569 command_runner.go:130] > runtime_root = "/run/runc"
	I0531 19:21:25.269545   51569 command_runner.go:130] > runtime_config_path = ""
	I0531 19:21:25.269556   51569 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0531 19:21:25.269564   51569 command_runner.go:130] > monitor_cgroup = "pod"
	I0531 19:21:25.269573   51569 command_runner.go:130] > monitor_exec_cgroup = ""
	I0531 19:21:25.269580   51569 command_runner.go:130] > monitor_env = [
	I0531 19:21:25.269593   51569 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0531 19:21:25.269601   51569 command_runner.go:130] > ]
	I0531 19:21:25.269610   51569 command_runner.go:130] > privileged_without_host_devices = false
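For reference, the handler options documented above can also be exercised without editing the main file, since CRI-O reads configuration drop-ins. The sketch below is illustrative only: the handler name "runc-debug", the drop-in filename and the choice to allow the seccomp notifier annotation are assumptions, and nothing in this test run defines such a handler.

	# Hypothetical drop-in declaring a second runc-based handler that is
	# allowed to process the seccomp notifier annotation described above.
	sudo tee /etc/crio/crio.conf.d/10-runc-debug.conf >/dev/null <<-'EOF'
		[crio.runtime.runtimes.runc-debug]
		runtime_path = "/usr/bin/runc"
		runtime_type = "oci"
		runtime_root = "/run/runc-debug"
		monitor_path = "/usr/libexec/crio/conmon"
		allowed_annotations = ["io.kubernetes.cri-o.seccompNotifierAction"]
	EOF
	sudo systemctl restart crio

A pod would then select this handler through a Kubernetes RuntimeClass whose handler field matches the stanza name.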
	I0531 19:21:25.269625   51569 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0531 19:21:25.269637   51569 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0531 19:21:25.269651   51569 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0531 19:21:25.269667   51569 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0531 19:21:25.269683   51569 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0531 19:21:25.269696   51569 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0531 19:21:25.269714   51569 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0531 19:21:25.269731   51569 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0531 19:21:25.269743   51569 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0531 19:21:25.269759   51569 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0531 19:21:25.269768   51569 command_runner.go:130] > # Example:
	I0531 19:21:25.269775   51569 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0531 19:21:25.269780   51569 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0531 19:21:25.269788   51569 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0531 19:21:25.269796   51569 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0531 19:21:25.269802   51569 command_runner.go:130] > # cpuset = 0
	I0531 19:21:25.269810   51569 command_runner.go:130] > # cpushares = "0-1"
	I0531 19:21:25.269817   51569 command_runner.go:130] > # Where:
	I0531 19:21:25.269825   51569 command_runner.go:130] > # The workload name is workload-type.
	I0531 19:21:25.269835   51569 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0531 19:21:25.269844   51569 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0531 19:21:25.269858   51569 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0531 19:21:25.269875   51569 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0531 19:21:25.269888   51569 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
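Because this table is annotation driven (and, as noted above, experimental), a pod opts in purely through metadata. A minimal sketch, assuming the example "workload-type" stanza above were actually uncommented; the pod name and image are placeholders and this run defines no such workload:

	kubectl apply -f - <<-'EOF'
		apiVersion: v1
		kind: Pod
		metadata:
		  name: workload-demo
		  annotations:
		    # key-only activation annotation; the value is ignored
		    io.crio/workload: ""
		spec:
		  containers:
		  - name: demo
		    image: busybox
		    command: ["sleep", "3600"]
	EOF

Per-container resource overrides would then use the prefixed annotation form shown in the comment above.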
	I0531 19:21:25.269899   51569 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0531 19:21:25.269910   51569 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0531 19:21:25.269920   51569 command_runner.go:130] > # Default value is set to true
	I0531 19:21:25.269940   51569 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0531 19:21:25.269952   51569 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0531 19:21:25.269963   51569 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0531 19:21:25.269974   51569 command_runner.go:130] > # Default value is set to 'false'
	I0531 19:21:25.269984   51569 command_runner.go:130] > # disable_hostport_mapping = false
	I0531 19:21:25.269996   51569 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0531 19:21:25.270004   51569 command_runner.go:130] > #
	I0531 19:21:25.270014   51569 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0531 19:21:25.270027   51569 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0531 19:21:25.270041   51569 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0531 19:21:25.270055   51569 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0531 19:21:25.270067   51569 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0531 19:21:25.270076   51569 command_runner.go:130] > [crio.image]
	I0531 19:21:25.270087   51569 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0531 19:21:25.270098   51569 command_runner.go:130] > # default_transport = "docker://"
	I0531 19:21:25.270110   51569 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0531 19:21:25.270123   51569 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:21:25.270133   51569 command_runner.go:130] > # global_auth_file = ""
	I0531 19:21:25.270144   51569 command_runner.go:130] > # The image used to instantiate infra containers.
	I0531 19:21:25.270154   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.270165   51569 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0531 19:21:25.270180   51569 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0531 19:21:25.270197   51569 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0531 19:21:25.270210   51569 command_runner.go:130] > # This option supports live configuration reload.
	I0531 19:21:25.270220   51569 command_runner.go:130] > # pause_image_auth_file = ""
	I0531 19:21:25.270232   51569 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0531 19:21:25.270243   51569 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0531 19:21:25.270257   51569 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0531 19:21:25.270277   51569 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0531 19:21:25.270288   51569 command_runner.go:130] > # pause_command = "/pause"
	I0531 19:21:25.270301   51569 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0531 19:21:25.270315   51569 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0531 19:21:25.270328   51569 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0531 19:21:25.270341   51569 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0531 19:21:25.270353   51569 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0531 19:21:25.270367   51569 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0531 19:21:25.270383   51569 command_runner.go:130] > # pinned_images = [
	I0531 19:21:25.270398   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270410   51569 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0531 19:21:25.270425   51569 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0531 19:21:25.270439   51569 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0531 19:21:25.270452   51569 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0531 19:21:25.270464   51569 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0531 19:21:25.270474   51569 command_runner.go:130] > # signature_policy = ""
	I0531 19:21:25.270486   51569 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0531 19:21:25.270498   51569 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0531 19:21:25.270511   51569 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0531 19:21:25.270525   51569 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0531 19:21:25.270538   51569 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0531 19:21:25.270550   51569 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0531 19:21:25.270563   51569 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0531 19:21:25.270576   51569 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0531 19:21:25.270584   51569 command_runner.go:130] > # changing them here.
	I0531 19:21:25.270594   51569 command_runner.go:130] > # insecure_registries = [
	I0531 19:21:25.270603   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270614   51569 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0531 19:21:25.270625   51569 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0531 19:21:25.270636   51569 command_runner.go:130] > # image_volumes = "mkdir"
	I0531 19:21:25.270648   51569 command_runner.go:130] > # Temporary directory to use for storing big files
	I0531 19:21:25.270658   51569 command_runner.go:130] > # big_files_temporary_dir = ""
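As the comments note, registry configuration is usually better left to /etc/containers/registries.conf; still, here is a rough sketch of CRI-O-local overrides for the options above, with a hypothetical internal registry host and a pin list that only repeats the default pause image (values are examples, not taken from this run):

	sudo tee /etc/crio/crio.conf.d/20-images.conf >/dev/null <<-'EOF'
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.9"
		pinned_images = ["registry.k8s.io/pause:3.9"]
		insecure_registries = ["registry.example.internal:5000"]
	EOF
	sudo systemctl restart crio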
	I0531 19:21:25.270669   51569 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0531 19:21:25.270679   51569 command_runner.go:130] > # CNI plugins.
	I0531 19:21:25.270688   51569 command_runner.go:130] > [crio.network]
	I0531 19:21:25.270698   51569 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0531 19:21:25.270710   51569 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0531 19:21:25.270721   51569 command_runner.go:130] > # cni_default_network = ""
	I0531 19:21:25.270731   51569 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0531 19:21:25.270742   51569 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0531 19:21:25.270755   51569 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0531 19:21:25.270765   51569 command_runner.go:130] > # plugin_dirs = [
	I0531 19:21:25.270773   51569 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0531 19:21:25.270779   51569 command_runner.go:130] > # ]
	I0531 19:21:25.270797   51569 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0531 19:21:25.270806   51569 command_runner.go:130] > [crio.metrics]
	I0531 19:21:25.270815   51569 command_runner.go:130] > # Globally enable or disable metrics support.
	I0531 19:21:25.270824   51569 command_runner.go:130] > enable_metrics = true
	I0531 19:21:25.270832   51569 command_runner.go:130] > # Specify enabled metrics collectors.
	I0531 19:21:25.270843   51569 command_runner.go:130] > # Per default all metrics are enabled.
	I0531 19:21:25.270856   51569 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0531 19:21:25.270874   51569 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0531 19:21:25.270886   51569 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0531 19:21:25.270896   51569 command_runner.go:130] > # metrics_collectors = [
	I0531 19:21:25.270905   51569 command_runner.go:130] > # 	"operations",
	I0531 19:21:25.270914   51569 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0531 19:21:25.270922   51569 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0531 19:21:25.270932   51569 command_runner.go:130] > # 	"operations_errors",
	I0531 19:21:25.270940   51569 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0531 19:21:25.270948   51569 command_runner.go:130] > # 	"image_pulls_by_name",
	I0531 19:21:25.270958   51569 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0531 19:21:25.270966   51569 command_runner.go:130] > # 	"image_pulls_failures",
	I0531 19:21:25.270974   51569 command_runner.go:130] > # 	"image_pulls_successes",
	I0531 19:21:25.270984   51569 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0531 19:21:25.270991   51569 command_runner.go:130] > # 	"image_layer_reuse",
	I0531 19:21:25.271001   51569 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0531 19:21:25.271008   51569 command_runner.go:130] > # 	"containers_oom_total",
	I0531 19:21:25.271019   51569 command_runner.go:130] > # 	"containers_oom",
	I0531 19:21:25.271026   51569 command_runner.go:130] > # 	"processes_defunct",
	I0531 19:21:25.271034   51569 command_runner.go:130] > # 	"operations_total",
	I0531 19:21:25.271042   51569 command_runner.go:130] > # 	"operations_latency_seconds",
	I0531 19:21:25.271052   51569 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0531 19:21:25.271063   51569 command_runner.go:130] > # 	"operations_errors_total",
	I0531 19:21:25.271073   51569 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0531 19:21:25.271081   51569 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0531 19:21:25.271090   51569 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0531 19:21:25.271097   51569 command_runner.go:130] > # 	"image_pulls_success_total",
	I0531 19:21:25.271107   51569 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0531 19:21:25.271116   51569 command_runner.go:130] > # 	"containers_oom_count_total",
	I0531 19:21:25.271127   51569 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0531 19:21:25.271148   51569 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0531 19:21:25.271157   51569 command_runner.go:130] > # ]
	I0531 19:21:25.271167   51569 command_runner.go:130] > # The port on which the metrics server will listen.
	I0531 19:21:25.271176   51569 command_runner.go:130] > # metrics_port = 9090
	I0531 19:21:25.271185   51569 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0531 19:21:25.271199   51569 command_runner.go:130] > # metrics_socket = ""
	I0531 19:21:25.271210   51569 command_runner.go:130] > # The certificate for the secure metrics server.
	I0531 19:21:25.271223   51569 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0531 19:21:25.271237   51569 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0531 19:21:25.271248   51569 command_runner.go:130] > # certificate on any modification event.
	I0531 19:21:25.271258   51569 command_runner.go:130] > # metrics_cert = ""
	I0531 19:21:25.271268   51569 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0531 19:21:25.271280   51569 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0531 19:21:25.271290   51569 command_runner.go:130] > # metrics_key = ""
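Since enable_metrics is set to true above and the commented default port is 9090, the endpoint can be spot-checked from the node itself (for example via "minikube ssh"); this assumes the port was left at its default and that nothing else is bound to it:

	# Prometheus-format metrics served by CRI-O on the node
	curl -s http://127.0.0.1:9090/metrics | head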
	I0531 19:21:25.271301   51569 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0531 19:21:25.271311   51569 command_runner.go:130] > [crio.tracing]
	I0531 19:21:25.271322   51569 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0531 19:21:25.271332   51569 command_runner.go:130] > # enable_tracing = false
	I0531 19:21:25.271345   51569 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0531 19:21:25.271356   51569 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0531 19:21:25.271370   51569 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0531 19:21:25.271380   51569 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0531 19:21:25.271391   51569 command_runner.go:130] > # CRI-O NRI configuration.
	I0531 19:21:25.271400   51569 command_runner.go:130] > [crio.nri]
	I0531 19:21:25.271408   51569 command_runner.go:130] > # Globally enable or disable NRI.
	I0531 19:21:25.271418   51569 command_runner.go:130] > # enable_nri = false
	I0531 19:21:25.271427   51569 command_runner.go:130] > # NRI socket to listen on.
	I0531 19:21:25.271435   51569 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0531 19:21:25.271446   51569 command_runner.go:130] > # NRI plugin directory to use.
	I0531 19:21:25.271457   51569 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0531 19:21:25.271468   51569 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0531 19:21:25.271480   51569 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0531 19:21:25.271492   51569 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0531 19:21:25.271503   51569 command_runner.go:130] > # nri_disable_connections = false
	I0531 19:21:25.271512   51569 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0531 19:21:25.271523   51569 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0531 19:21:25.271538   51569 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0531 19:21:25.271549   51569 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0531 19:21:25.271562   51569 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0531 19:21:25.271571   51569 command_runner.go:130] > [crio.stats]
	I0531 19:21:25.271582   51569 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0531 19:21:25.271593   51569 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0531 19:21:25.271601   51569 command_runner.go:130] > # stats_collection_period = 0
	I0531 19:21:25.271648   51569 command_runner.go:130] ! time="2024-05-31 19:21:25.224840099Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0531 19:21:25.271670   51569 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0531 19:21:25.271822   51569 cni.go:84] Creating CNI manager for ""
	I0531 19:21:25.271837   51569 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0531 19:21:25.271853   51569 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:21:25.271880   51569 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.78 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-988083 NodeName:multinode-988083 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.78"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.78 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:21:25.272055   51569 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.78
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-988083"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.78
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.78"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:21:25.272127   51569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 19:21:25.282412   51569 command_runner.go:130] > kubeadm
	I0531 19:21:25.282425   51569 command_runner.go:130] > kubectl
	I0531 19:21:25.282429   51569 command_runner.go:130] > kubelet
	I0531 19:21:25.282470   51569 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:21:25.282522   51569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:21:25.291767   51569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0531 19:21:25.308411   51569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:21:25.326700   51569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0531 19:21:25.342991   51569 ssh_runner.go:195] Run: grep 192.168.39.78	control-plane.minikube.internal$ /etc/hosts
	I0531 19:21:25.346784   51569 command_runner.go:130] > 192.168.39.78	control-plane.minikube.internal
	I0531 19:21:25.346870   51569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:21:25.479741   51569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:21:25.494815   51569 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083 for IP: 192.168.39.78
	I0531 19:21:25.494871   51569 certs.go:194] generating shared ca certs ...
	I0531 19:21:25.494895   51569 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:21:25.495054   51569 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:21:25.495107   51569 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:21:25.495121   51569 certs.go:256] generating profile certs ...
	I0531 19:21:25.495205   51569 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/client.key
	I0531 19:21:25.495272   51569 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key.825c99a6
	I0531 19:21:25.495334   51569 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key
	I0531 19:21:25.495348   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0531 19:21:25.495372   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0531 19:21:25.495391   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0531 19:21:25.495407   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0531 19:21:25.495422   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0531 19:21:25.495438   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0531 19:21:25.495453   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0531 19:21:25.495467   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0531 19:21:25.495537   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:21:25.495581   51569 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:21:25.495595   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:21:25.495628   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:21:25.495662   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:21:25.495696   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:21:25.495752   51569 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:21:25.495794   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem -> /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.495813   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.495832   51569 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.496415   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:21:25.521768   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:21:25.545150   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:21:25.568155   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:21:25.592254   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0531 19:21:25.615899   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0531 19:21:25.642528   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:21:25.667166   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/multinode-988083/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:21:25.692434   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:21:25.716455   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:21:25.739890   51569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:21:25.762943   51569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:21:25.779233   51569 ssh_runner.go:195] Run: openssl version
	I0531 19:21:25.784993   51569 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0531 19:21:25.785183   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:21:25.795653   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800103   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800152   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.800199   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:21:25.805727   51569 command_runner.go:130] > 51391683
	I0531 19:21:25.805892   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 19:21:25.814772   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:21:25.825016   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829323   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829493   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.829528   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:21:25.834976   51569 command_runner.go:130] > 3ec20f2e
	I0531 19:21:25.835012   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:21:25.844231   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:21:25.855300   51569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860594   51569 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860857   51569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.860917   51569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:21:25.866758   51569 command_runner.go:130] > b5213941
	I0531 19:21:25.866825   51569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:21:25.876992   51569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:21:25.881663   51569 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:21:25.881689   51569 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0531 19:21:25.881698   51569 command_runner.go:130] > Device: 253,1	Inode: 2104342     Links: 1
	I0531 19:21:25.881707   51569 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0531 19:21:25.881717   51569 command_runner.go:130] > Access: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881728   51569 command_runner.go:130] > Modify: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881738   51569 command_runner.go:130] > Change: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881746   51569 command_runner.go:130] >  Birth: 2024-05-31 19:15:13.981376064 +0000
	I0531 19:21:25.881797   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:21:25.887520   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.887695   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:21:25.893098   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.893496   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:21:25.898902   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.898981   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:21:25.904411   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.904458   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:21:25.909979   51569 command_runner.go:130] > Certificate will not expire
	I0531 19:21:25.910151   51569 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0531 19:21:25.915245   51569 command_runner.go:130] > Certificate will not expire
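The expiry checks above run the same openssl invocation once per certificate; below is a small sketch that sweeps every certificate under the paths used in this log (the .crt glob is an assumption about the file layout), where -checkend 86400 exits non-zero if a certificate expires within the next 24 hours:

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  echo -n "$c: "
	  sudo openssl x509 -noout -in "$c" -checkend 86400
	done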
	I0531 19:21:25.915580   51569 kubeadm.go:391] StartCluster: {Name:multinode-988083 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
1 ClusterName:multinode-988083 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.78 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.176 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.207 Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:21:25.915776   51569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:21:25.915814   51569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:21:25.952622   51569 command_runner.go:130] > 021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399
	I0531 19:21:25.952649   51569 command_runner.go:130] > 608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b
	I0531 19:21:25.952655   51569 command_runner.go:130] > 11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82
	I0531 19:21:25.952662   51569 command_runner.go:130] > ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f
	I0531 19:21:25.952667   51569 command_runner.go:130] > 7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6
	I0531 19:21:25.952672   51569 command_runner.go:130] > db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9
	I0531 19:21:25.952677   51569 command_runner.go:130] > 48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357
	I0531 19:21:25.952684   51569 command_runner.go:130] > 5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2
	I0531 19:21:25.952701   51569 cri.go:89] found id: "021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399"
	I0531 19:21:25.952709   51569 cri.go:89] found id: "608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b"
	I0531 19:21:25.952712   51569 cri.go:89] found id: "11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82"
	I0531 19:21:25.952715   51569 cri.go:89] found id: "ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f"
	I0531 19:21:25.952718   51569 cri.go:89] found id: "7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6"
	I0531 19:21:25.952720   51569 cri.go:89] found id: "db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9"
	I0531 19:21:25.952723   51569 cri.go:89] found id: "48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357"
	I0531 19:21:25.952725   51569 cri.go:89] found id: "5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2"
	I0531 19:21:25.952728   51569 cri.go:89] found id: ""
	I0531 19:21:25.952765   51569 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.948095142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183514948073230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8428480-b4d9-497a-b078-753315e86575 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.948626277Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b8297cb1-160f-4878-bc74-30032eb03725 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.948684452Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b8297cb1-160f-4878-bc74-30032eb03725 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.949016835Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b8297cb1-160f-4878-bc74-30032eb03725 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.994402379Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c70342ae-9eb3-4990-8ae9-9979fb7a985a name=/runtime.v1.RuntimeService/Version
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.994475148Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c70342ae-9eb3-4990-8ae9-9979fb7a985a name=/runtime.v1.RuntimeService/Version
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.995392461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6f215fa9-485d-4db4-b525-eb4a1b52b4de name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.995831325Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183514995806082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6f215fa9-485d-4db4-b525-eb4a1b52b4de name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.996364688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dab51f7-053c-48d6-80a2-352d7ee83668 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:14 multinode-988083 crio[2885]: time="2024-05-31 19:25:14.996418039Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dab51f7-053c-48d6-80a2-352d7ee83668 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.038092434Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47cb62f0-8e48-4667-bbf4-ddd7bc2f73a1 name=/runtime.v1.RuntimeService/Version
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.038176704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47cb62f0-8e48-4667-bbf4-ddd7bc2f73a1 name=/runtime.v1.RuntimeService/Version
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.039558093Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25df5161-4481-4438-a287-8d893a43efc4 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.040439894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183515040413038,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25df5161-4481-4438-a287-8d893a43efc4 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.041035226Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ee3257e7-f381-4e94-b9d5-b3ad7fe17a05 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.041145049Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ee3257e7-f381-4e94-b9d5-b3ad7fe17a05 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.088967145Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=45fa944b-e67c-46aa-b479-9c17d7c55a8e name=/runtime.v1.RuntimeService/Version
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.089040977Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=45fa944b-e67c-46aa-b479-9c17d7c55a8e name=/runtime.v1.RuntimeService/Version
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.090931105Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=885425e6-6067-416f-b3ff-16458606fd51 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.091402572Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717183515091379132,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143025,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=885425e6-6067-416f-b3ff-16458606fd51 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.092379188Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a82e4374-19bc-46df-bd42-f114cc610a3c name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.092478262Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a82e4374-19bc-46df-bd42-f114cc610a3c name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:25:15 multinode-988083 crio[2885]: time="2024-05-31 19:25:15.092804841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8113967a40174c0ba93e0f6e6fe15a0ada03d55aee473cc27c11f5d0a106f4e5,PodSandboxId:4c713433e3e576824b85b11b71e81c8d5e954e38fc60a4cc2b506fae906b5fa0,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1717183326146963862,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8,PodSandboxId:46f586e0ccd12f62627d2e7c76d431eb43cc37b0312fd43366284791957c9a18,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_RUNNING,CreatedAt:1717183292622882441,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74,PodSandboxId:2f03e186c05036ee5f4ffa2fcba4997d8dc61721031c08c3e8471466645a8f45,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717183292421827026,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:565d8eebc79d641fedba16415bb9b8bfcf5862d315630ab715494026050b49b1,PodSandboxId:9c9eee77ba89037d110c97f9a171253f4334bb83ba374a1b85314bd706c629f4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1717183292367931798,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},An
notations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab,PodSandboxId:f826816bfc53c4c0ea656f483a2eb2560df8c555c85ebbe4cfe62dc7e16efdd1,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_RUNNING,CreatedAt:1717183292401156380,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf,PodSandboxId:cb1ef0fda917b83e52872398958d9c9a8719425aebaf07ebbe44d7902f219d10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717183288585163444,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d367849347,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4,PodSandboxId:646d4ae505009fade8ed00a3b268f16689551fdbc7736be9334cbc40bb6e9604,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717183288582784955,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cdd68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b
5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c,PodSandboxId:923ca500210d0c98bca08281bc96f6811d9aa5bdce02c4a8e924db5547d5cf70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717183288496488539,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:map[string]string{io.kubernetes.container.hash: ae702264,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd,PodSandboxId:a69738f516151c20ca2dd16ad2866200a0932347c6540f69db55f99a15565e0e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717183288500788425,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3989aa18c530a2bfba424f3b380bda6c7e4d375a4670dbd853e74d77fa4b7880,PodSandboxId:1119cb1560382af18c19784adef4132c03214ec0d7ac051cf896788295bdc747,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1717182990422419001,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-t6fcj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 222bc4e6-48ac-49a5-ac08-029712c2c95b,},Annotations:map[string]string{io.kubernetes.container.hash: e27bf82a,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399,PodSandboxId:18c9eea462560cf0de5d57ccb708ef9d216ea8bdb0019ef1028c23f0520cadc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717182944253690932,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-gn5s6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1c95c7e-5056-4166-a4e5-64843b6c0d1f,},Annotations:map[string]string{io.kubernetes.container.hash: 93bb25be,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:608df439d70ed8df5a810cf3089efb9d6ab7154507def165ba420fb319535f6b,PodSandboxId:ab3d659db26ecd2d35254a4b904a2fb82d1e53aa2d7f98a948d1e14c59da6787,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717182944202247045,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: ee986eb4-5a08-4aee-9fbd-1180c6b8d145,},Annotations:map[string]string{io.kubernetes.container.hash: 28fa8aa0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82,PodSandboxId:caa9f33f0891b0be7803d8bbc979534e5fcfd3b8caf3312d95737abacde49b09,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f,State:CONTAINER_EXITED,CreatedAt:1717182942359012308,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zcptz,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: b40f7eb3-69bd-4fed-b782-6ddec667356d,},Annotations:map[string]string{io.kubernetes.container.hash: f8daa519,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f,PodSandboxId:73f8d20d8c9264c0cee44a1a483986ecd515ca364bf36d5777854ee68a18ad6d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717182937824621877,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8cnnb,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 45d80625-5a0f-4e15-96fd-dbecc3b1dcc9,},Annotations:map[string]string{io.kubernetes.container.hash: 748babe8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9,PodSandboxId:0320af84cec4304373d413bd7c5f188ff0cc1b6c695de30b18eb579e6d15450f,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717182917853426731,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b6bd12599bc9131b4d5884d3678493
47,},Annotations:map[string]string{io.kubernetes.container.hash: 9c8bc720,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357,PodSandboxId:d3d9ef37fd07084f07445e721c06549f96280f769541950b95e8223475605698,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717182917822238797,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9755ed1e8092104cd
d68a861fcb06e8,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6,PodSandboxId:52691cee69e6478d6925ba955f0764e4cb32f2b53382694bbdb54db4b402703f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717182917870977228,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 821bb82ccf5670185394082987f2e1c4,
},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2,PodSandboxId:8923e5648e4dcc97133540cc544201c3addc8e341425c5dbe95e65211b2001bc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717182917789244828,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-988083,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50ea6493bc5174ee9e38ec8af47b27ac,},Annotations:m
ap[string]string{io.kubernetes.container.hash: ae702264,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a82e4374-19bc-46df-bd42-f114cc610a3c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8113967a40174       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   4c713433e3e57       busybox-fc5497c4f-t6fcj
	bcbe244eb4305       ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f                                      3 minutes ago       Running             kindnet-cni               1                   46f586e0ccd12       kindnet-zcptz
	374c544c21751       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   1                   2f03e186c0503       coredns-7db6d8ff4d-gn5s6
	35ccdd242695a       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      3 minutes ago       Running             kube-proxy                1                   f826816bfc53c       kube-proxy-8cnnb
	565d8eebc79d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   9c9eee77ba890       storage-provisioner
	7eba17d8c1a7a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      3 minutes ago       Running             etcd                      1                   cb1ef0fda917b       etcd-multinode-988083
	771ce65646831       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      3 minutes ago       Running             kube-controller-manager   1                   646d4ae505009       kube-controller-manager-multinode-988083
	139bda4e9c6e8       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      3 minutes ago       Running             kube-scheduler            1                   a69738f516151       kube-scheduler-multinode-988083
	1c238706b8ea8       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      3 minutes ago       Running             kube-apiserver            1                   923ca500210d0       kube-apiserver-multinode-988083
	3989aa18c530a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   1119cb1560382       busybox-fc5497c4f-t6fcj
	021a7951eea76       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      9 minutes ago       Exited              coredns                   0                   18c9eea462560       coredns-7db6d8ff4d-gn5s6
	608df439d70ed       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   ab3d659db26ec       storage-provisioner
	11fcd8b1bb42b       docker.io/kindest/kindnetd@sha256:2b34f64609858041e706963bcd73273c087360ca240f1f9b37db6f148edb1266    9 minutes ago       Exited              kindnet-cni               0                   caa9f33f0891b       kindnet-zcptz
	ff4fdd6109937       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd                                      9 minutes ago       Exited              kube-proxy                0                   73f8d20d8c926       kube-proxy-8cnnb
	7e3b7dd02f327       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035                                      9 minutes ago       Exited              kube-scheduler            0                   52691cee69e64       kube-scheduler-multinode-988083
	db1fde1550887       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      9 minutes ago       Exited              etcd                      0                   0320af84cec43       etcd-multinode-988083
	48c2219950201       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c                                      9 minutes ago       Exited              kube-controller-manager   0                   d3d9ef37fd070       kube-controller-manager-multinode-988083
	5180205e9c241       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a                                      9 minutes ago       Exited              kube-apiserver            0                   8923e5648e4dc       kube-apiserver-multinode-988083
	
	
	==> coredns [021a7951eea768ae012516ba22a17cf15a6665dcafce200703ff1e1cff6cb399] <==
	[INFO] 10.244.1.2:43548 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001706273s
	[INFO] 10.244.1.2:60115 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109261s
	[INFO] 10.244.1.2:60707 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079029s
	[INFO] 10.244.1.2:36626 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001273064s
	[INFO] 10.244.1.2:52390 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000225703s
	[INFO] 10.244.1.2:56106 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092886s
	[INFO] 10.244.1.2:56334 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091338s
	[INFO] 10.244.0.3:45555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115436s
	[INFO] 10.244.0.3:56080 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000031809s
	[INFO] 10.244.0.3:55687 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000108909s
	[INFO] 10.244.0.3:35835 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102581s
	[INFO] 10.244.1.2:51436 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132426s
	[INFO] 10.244.1.2:52123 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000151267s
	[INFO] 10.244.1.2:46390 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000166264s
	[INFO] 10.244.1.2:55230 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092441s
	[INFO] 10.244.0.3:35015 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000254081s
	[INFO] 10.244.0.3:46384 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00009303s
	[INFO] 10.244.0.3:49629 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000079659s
	[INFO] 10.244.0.3:45631 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000196091s
	[INFO] 10.244.1.2:55118 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011302s
	[INFO] 10.244.1.2:46076 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100271s
	[INFO] 10.244.1.2:32855 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071938s
	[INFO] 10.244.1.2:46235 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000105924s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [374c544c21751512ef0ddacb9a3b040296a0b0f47d367bc714ae8b61a2de9a74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38299 - 63203 "HINFO IN 2236214529243632812.2977748041558225387. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.092639958s
	
	
	==> describe nodes <==
	Name:               multinode-988083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-988083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=multinode-988083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_31T19_15_24_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:15:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-988083
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:25:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:21:31 +0000   Fri, 31 May 2024 19:15:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.78
	  Hostname:    multinode-988083
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1f0d3ea92bfc420b9fe05a28ebe0054f
	  System UUID:                1f0d3ea9-2bfc-420b-9fe0-5a28ebe0054f
	  Boot ID:                    b045e951-cb60-4f76-8208-2f943082b070
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-t6fcj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m48s
	  kube-system                 coredns-7db6d8ff4d-gn5s6                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m38s
	  kube-system                 etcd-multinode-988083                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m53s
	  kube-system                 kindnet-zcptz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m38s
	  kube-system                 kube-apiserver-multinode-988083             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-controller-manager-multinode-988083    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 kube-proxy-8cnnb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                 kube-scheduler-multinode-988083             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  Starting                 3m42s                  kube-proxy       
	  Normal  Starting                 9m53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     9m52s                  kubelet          Node multinode-988083 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  9m52s                  kubelet          Node multinode-988083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s                  kubelet          Node multinode-988083 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m39s                  node-controller  Node multinode-988083 event: Registered Node multinode-988083 in Controller
	  Normal  NodeReady                9m32s                  kubelet          Node multinode-988083 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-988083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-988083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-988083 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m31s                  node-controller  Node multinode-988083 event: Registered Node multinode-988083 in Controller
	
	
	Name:               multinode-988083-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-988083-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=138945beedc8425ce94bc8c3f0970324094b8404
	                    minikube.k8s.io/name=multinode-988083
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_31T19_22_12_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:22:11 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-988083-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:22:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:23:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:23:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:23:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 31 May 2024 19:22:42 +0000   Fri, 31 May 2024 19:23:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    multinode-988083-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2259ed64316412d89f8d1d302d9cbe5
	  System UUID:                d2259ed6-4316-412d-89f8-d1d302d9cbe5
	  Boot ID:                    79876d25-b44a-4fc8-8ee8-83f2b763e7a2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-blqgt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-xx6dr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m
	  kube-system                 kube-proxy-9htg9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  Starting                 8m54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  9m1s (x2 over 9m1s)  kubelet          Node multinode-988083-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m1s (x2 over 9m1s)  kubelet          Node multinode-988083-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m1s (x2 over 9m1s)  kubelet          Node multinode-988083-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m50s                kubelet          Node multinode-988083-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m4s (x2 over 3m4s)  kubelet          Node multinode-988083-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x2 over 3m4s)  kubelet          Node multinode-988083-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x2 over 3m4s)  kubelet          Node multinode-988083-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m55s                kubelet          Node multinode-988083-m02 status is now: NodeReady
	  Normal  NodeNotReady             101s                 node-controller  Node multinode-988083-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.065541] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.059287] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.187155] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.111178] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.259616] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +4.118457] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.718279] systemd-fstab-generator[952]: Ignoring "noauto" option for root device
	[  +0.062283] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.983017] systemd-fstab-generator[1287]: Ignoring "noauto" option for root device
	[  +0.074714] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.890375] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.764667] systemd-fstab-generator[1484]: Ignoring "noauto" option for root device
	[  +5.648919] kauditd_printk_skb: 56 callbacks suppressed
	[May31 19:16] kauditd_printk_skb: 16 callbacks suppressed
	[May31 19:21] systemd-fstab-generator[2803]: Ignoring "noauto" option for root device
	[  +0.153744] systemd-fstab-generator[2815]: Ignoring "noauto" option for root device
	[  +0.182260] systemd-fstab-generator[2830]: Ignoring "noauto" option for root device
	[  +0.140102] systemd-fstab-generator[2842]: Ignoring "noauto" option for root device
	[  +0.268583] systemd-fstab-generator[2870]: Ignoring "noauto" option for root device
	[  +0.687886] systemd-fstab-generator[2970]: Ignoring "noauto" option for root device
	[  +2.226217] systemd-fstab-generator[3095]: Ignoring "noauto" option for root device
	[  +4.685331] kauditd_printk_skb: 184 callbacks suppressed
	[ +11.603475] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.509237] systemd-fstab-generator[3902]: Ignoring "noauto" option for root device
	[May31 19:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [7eba17d8c1a7ab5edd3d9b44528b5c075e4844fcb027c42ca17481bb6e63d8cf] <==
	{"level":"info","ts":"2024-05-31T19:21:28.926107Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","added-peer-id":"83fde65c75733ea3","added-peer-peer-urls":["https://192.168.39.78:2380"]}
	{"level":"info","ts":"2024-05-31T19:21:28.926272Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"254f9db842b1870b","local-member-id":"83fde65c75733ea3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:21:28.927193Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:21:28.939194Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.940127Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.940173Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:21:28.975554Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-31T19:21:28.975866Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"83fde65c75733ea3","initial-advertise-peer-urls":["https://192.168.39.78:2380"],"listen-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.78:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T19:21:28.975944Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T19:21:28.976071Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:21:28.976096Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:21:30.283996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.284096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.28416Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgPreVoteResp from 83fde65c75733ea3 at term 2"}
	{"level":"info","ts":"2024-05-31T19:21:30.284197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284226Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 received MsgVoteResp from 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284252Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83fde65c75733ea3 became leader at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.284282Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83fde65c75733ea3 elected leader 83fde65c75733ea3 at term 3"}
	{"level":"info","ts":"2024-05-31T19:21:30.288783Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"83fde65c75733ea3","local-member-attributes":"{Name:multinode-988083 ClientURLs:[https://192.168.39.78:2379]}","request-path":"/0/members/83fde65c75733ea3/attributes","cluster-id":"254f9db842b1870b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-31T19:21:30.288995Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:21:30.289027Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:21:30.289553Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-31T19:21:30.2896Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-31T19:21:30.291395Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.78:2379"}
	{"level":"info","ts":"2024-05-31T19:21:30.291492Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [db1fde155088750e664c91dbc90c917793a04d7ec491cd1b43414740d8efe8b9] <==
	{"level":"warn","ts":"2024-05-31T19:15:20.927689Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-31T19:15:20.506585Z","time spent":"421.092394ms","remote":"127.0.0.1:35232","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":0,"response size":27,"request content":"key:\"/registry/resourcequotas/kube-system/\" range_end:\"/registry/resourcequotas/kube-system0\" "}
	{"level":"warn","ts":"2024-05-31T19:15:20.927841Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.997769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-05-31T19:15:20.927879Z","caller":"traceutil/trace.go:171","msg":"trace[1517232406] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:17; }","duration":"154.05085ms","start":"2024-05-31T19:15:20.773823Z","end":"2024-05-31T19:15:20.927874Z","steps":["trace[1517232406] 'agreement among raft nodes before linearized reading'  (duration: 154.006292ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:15.157085Z","caller":"traceutil/trace.go:171","msg":"trace[10928778] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"219.182497ms","start":"2024-05-31T19:16:14.937865Z","end":"2024-05-31T19:16:15.157048Z","steps":["trace[10928778] 'process raft request'  (duration: 144.97588ms)","trace[10928778] 'compare'  (duration: 74.095272ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-31T19:16:15.157828Z","caller":"traceutil/trace.go:171","msg":"trace[812641342] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"183.982685ms","start":"2024-05-31T19:16:14.973835Z","end":"2024-05-31T19:16:15.157818Z","steps":["trace[812641342] 'process raft request'  (duration: 183.680901ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-31T19:16:17.603573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.157621ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4513609375405738155 > lease_revoke:<id:3ea38fd013ee882f>","response":"size:28"}
	{"level":"info","ts":"2024-05-31T19:16:17.603715Z","caller":"traceutil/trace.go:171","msg":"trace[492931041] linearizableReadLoop","detail":"{readStateIndex:502; appliedIndex:501; }","duration":"199.187142ms","start":"2024-05-31T19:16:17.404512Z","end":"2024-05-31T19:16:17.603699Z","steps":["trace[492931041] 'read index received'  (duration: 15.44467ms)","trace[492931041] 'applied index is now lower than readState.Index'  (duration: 183.740726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-31T19:16:17.603836Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.292998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-988083-m02\" ","response":"range_response_count:1 size:3273"}
	{"level":"info","ts":"2024-05-31T19:16:17.603891Z","caller":"traceutil/trace.go:171","msg":"trace[984424241] range","detail":"{range_begin:/registry/minions/multinode-988083-m02; range_end:; response_count:1; response_revision:479; }","duration":"199.400521ms","start":"2024-05-31T19:16:17.404479Z","end":"2024-05-31T19:16:17.60388Z","steps":["trace[984424241] 'agreement among raft nodes before linearized reading'  (duration: 199.265106ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:59.095581Z","caller":"traceutil/trace.go:171","msg":"trace[1955100436] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"184.424604ms","start":"2024-05-31T19:16:58.911143Z","end":"2024-05-31T19:16:59.095568Z","steps":["trace[1955100436] 'process raft request'  (duration: 184.391065ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:16:59.095815Z","caller":"traceutil/trace.go:171","msg":"trace[1033148884] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"255.185052ms","start":"2024-05-31T19:16:58.840614Z","end":"2024-05-31T19:16:59.095799Z","steps":["trace[1033148884] 'process raft request'  (duration: 220.547209ms)","trace[1033148884] 'compare'  (duration: 34.291818ms)"],"step_count":2}
	{"level":"info","ts":"2024-05-31T19:16:59.095948Z","caller":"traceutil/trace.go:171","msg":"trace[293603268] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"248.510828ms","start":"2024-05-31T19:16:58.847431Z","end":"2024-05-31T19:16:59.095942Z","steps":["trace[293603268] 'read index received'  (duration: 213.739613ms)","trace[293603268] 'applied index is now lower than readState.Index'  (duration: 34.770676ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-31T19:16:59.096061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.617354ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-988083-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-05-31T19:16:59.096098Z","caller":"traceutil/trace.go:171","msg":"trace[37434597] range","detail":"{range_begin:/registry/minions/multinode-988083-m03; range_end:; response_count:1; response_revision:574; }","duration":"248.694831ms","start":"2024-05-31T19:16:58.847397Z","end":"2024-05-31T19:16:59.096092Z","steps":["trace[37434597] 'agreement among raft nodes before linearized reading'  (duration: 248.584034ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:17:41.776669Z","caller":"traceutil/trace.go:171","msg":"trace[536818303] transaction","detail":"{read_only:false; response_revision:686; number_of_response:1; }","duration":"116.65299ms","start":"2024-05-31T19:17:41.659998Z","end":"2024-05-31T19:17:41.776651Z","steps":["trace[536818303] 'process raft request'  (duration: 116.468685ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-31T19:19:52.789278Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-31T19:19:52.789527Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-988083","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	{"level":"warn","ts":"2024-05-31T19:19:52.789636Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.789768Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.882417Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:19:52.882545Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.78:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-31T19:19:52.882618Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"83fde65c75733ea3","current-leader-member-id":"83fde65c75733ea3"}
	{"level":"info","ts":"2024-05-31T19:19:52.884839Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:19:52.884985Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.78:2380"}
	{"level":"info","ts":"2024-05-31T19:19:52.88502Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-988083","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.78:2380"],"advertise-client-urls":["https://192.168.39.78:2379"]}
	
	
	==> kernel <==
	 19:25:15 up 10 min,  0 users,  load average: 0.27, 0.31, 0.22
	Linux multinode-988083 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [11fcd8b1bb42ba028849b046d8e431239599a6dc9af9cd9241f11f1b41361c82] <==
	I0531 19:19:03.376895       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:13.386237       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:13.386368       1 main.go:227] handling current node
	I0531 19:19:13.386427       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:13.386452       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:13.386577       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:13.386599       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:23.401979       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:23.402019       1 main.go:227] handling current node
	I0531 19:19:23.402030       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:23.402035       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:23.402172       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:23.402198       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:33.414903       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:33.414983       1 main.go:227] handling current node
	I0531 19:19:33.415090       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:33.415117       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:33.415247       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:33.415267       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	I0531 19:19:43.450434       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:19:43.450538       1 main.go:227] handling current node
	I0531 19:19:43.450567       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:19:43.450585       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:19:43.450730       1 main.go:223] Handling node with IPs: map[192.168.39.207:{}]
	I0531 19:19:43.450751       1 main.go:250] Node multinode-988083-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [bcbe244eb4305858326fe0076eb7968c3235a12c5a3078a51d991d6490d14cb8] <==
	I0531 19:24:13.544906       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:24:23.551880       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:24:23.551921       1 main.go:227] handling current node
	I0531 19:24:23.551930       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:24:23.551935       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:24:33.556337       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:24:33.556577       1 main.go:227] handling current node
	I0531 19:24:33.556620       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:24:33.556640       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:24:43.561697       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:24:43.561750       1 main.go:227] handling current node
	I0531 19:24:43.561762       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:24:43.561769       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:24:53.566147       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:24:53.566185       1 main.go:227] handling current node
	I0531 19:24:53.566195       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:24:53.566200       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:25:03.577916       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:25:03.577955       1 main.go:227] handling current node
	I0531 19:25:03.577966       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:25:03.577970       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	I0531 19:25:13.582829       1 main.go:223] Handling node with IPs: map[192.168.39.78:{}]
	I0531 19:25:13.582924       1 main.go:227] handling current node
	I0531 19:25:13.582948       1 main.go:223] Handling node with IPs: map[192.168.39.176:{}]
	I0531 19:25:13.582965       1 main.go:250] Node multinode-988083-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1c238706b8ea87c60c9121cb15ec72647c8fbf86c6379dc7766d8c142fe6750c] <==
	I0531 19:21:31.516413       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0531 19:21:31.575829       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 19:21:31.576990       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 19:21:31.577049       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 19:21:31.577206       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 19:21:31.577256       1 aggregator.go:165] initial CRD sync complete...
	I0531 19:21:31.577263       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 19:21:31.577267       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 19:21:31.578732       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 19:21:31.586763       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:21:31.652350       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 19:21:31.660005       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 19:21:31.660040       1 policy_source.go:224] refreshing policies
	I0531 19:21:31.675338       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:21:31.675681       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:21:31.680330       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 19:21:31.683607       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:21:32.493233       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:21:33.822432       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 19:21:33.947592       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 19:21:33.959457       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 19:21:34.020967       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:21:34.026026       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:21:43.845432       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 19:21:43.877807       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [5180205e9c241baf89ec6b3833b30db96f9646a5f04267360b9abf934c269af2] <==
	I0531 19:19:52.823948       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0531 19:19:52.823959       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0531 19:19:52.823991       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0531 19:19:52.824032       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0531 19:19:52.824058       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0531 19:19:52.824237       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0531 19:19:52.824427       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0531 19:19:52.825604       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	I0531 19:19:52.825698       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	E0531 19:19:52.826132       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826232       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826388       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826478       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0531 19:19:52.826553       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0531 19:19:52.826645       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.826688       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828095       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828210       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828645       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828770       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828880       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.828995       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.829121       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:19:52.829241       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0531 19:19:52.832262       1 controller.go:157] Shutting down quota evaluator
	
	
	==> kube-controller-manager [48c2219950201b49cf07e9e1befc1991fd3252b80978856d6c6500e995c45357] <==
	I0531 19:16:15.159762       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m02\" does not exist"
	I0531 19:16:15.224390       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m02" podCIDRs=["10.244.1.0/24"]
	I0531 19:16:16.522992       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-988083-m02"
	I0531 19:16:25.247671       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:16:27.468267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.985787ms"
	I0531 19:16:27.480547       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.719584ms"
	I0531 19:16:27.480975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.58µs"
	I0531 19:16:27.481105       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.051µs"
	I0531 19:16:27.489814       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="75.16µs"
	I0531 19:16:31.345910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.996712ms"
	I0531 19:16:31.351051       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.084452ms"
	I0531 19:16:31.351267       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.182µs"
	I0531 19:16:59.098860       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:16:59.098977       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:16:59.112099       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.2.0/24"]
	I0531 19:17:01.544777       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-988083-m03"
	I0531 19:17:09.249119       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:37.368257       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:38.464763       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:17:38.464822       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:17:38.483785       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.3.0/24"]
	I0531 19:17:47.187647       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:18:31.598872       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m03"
	I0531 19:18:31.657562       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.538203ms"
	I0531 19:18:31.657702       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.684µs"
	
	
	==> kube-controller-manager [771ce6564683154c641354e61cda142a19814c48e6adb360a67c3a08948861a4] <==
	I0531 19:22:11.858053       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m02\" does not exist"
	I0531 19:22:11.870129       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m02" podCIDRs=["10.244.1.0/24"]
	I0531 19:22:12.767909       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.998µs"
	I0531 19:22:12.781039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.042µs"
	I0531 19:22:12.814224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="126.207µs"
	I0531 19:22:12.821469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.859µs"
	I0531 19:22:12.828970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.134µs"
	I0531 19:22:14.698327       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.501µs"
	I0531 19:22:20.113903       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:20.133464       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.019µs"
	I0531 19:22:20.147378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.551µs"
	I0531 19:22:24.102583       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.988515ms"
	I0531 19:22:24.103263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="148.546µs"
	I0531 19:22:38.415221       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:39.693410       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:39.693459       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-988083-m03\" does not exist"
	I0531 19:22:39.702445       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-988083-m03" podCIDRs=["10.244.2.0/24"]
	I0531 19:22:48.722859       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:22:53.988486       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-988083-m02"
	I0531 19:23:34.121185       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.214616ms"
	I0531 19:23:34.121495       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.867µs"
	I0531 19:23:43.851001       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2v9gf"
	I0531 19:23:43.879050       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-2v9gf"
	I0531 19:23:43.879109       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2c6qt"
	I0531 19:23:43.905348       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-2c6qt"
	
	
	==> kube-proxy [35ccdd242695a35c5e28c3d20ce5f551e4bd3cf37e0e3c9cb280ad7c7f2e74ab] <==
	I0531 19:21:32.683675       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:21:32.701811       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0531 19:21:32.811283       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:21:32.811443       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:21:32.811482       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:21:32.817570       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:21:32.817830       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:21:32.817862       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:21:32.819179       1 config.go:192] "Starting service config controller"
	I0531 19:21:32.819232       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:21:32.819260       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:21:32.819282       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:21:32.826005       1 config.go:319] "Starting node config controller"
	I0531 19:21:32.826047       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:21:32.920451       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:21:32.925765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 19:21:32.926125       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [ff4fdd6109937a7f0d8048d021562addfba2662f570c5b3eb55b223c7d214f2f] <==
	I0531 19:15:38.227526       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:15:38.242932       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.78"]
	I0531 19:15:38.336613       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:15:38.336654       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:15:38.336682       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:15:38.339959       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:15:38.340183       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:15:38.340196       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:15:38.341913       1 config.go:192] "Starting service config controller"
	I0531 19:15:38.341940       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:15:38.341967       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:15:38.341971       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:15:38.342785       1 config.go:319] "Starting node config controller"
	I0531 19:15:38.342792       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:15:38.442419       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0531 19:15:38.442495       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:15:38.444381       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [139bda4e9c6e8970a0622b94abe5bb1045277435db6209ac6414d270691719fd] <==
	I0531 19:21:29.652867       1 serving.go:380] Generated self-signed cert in-memory
	W0531 19:21:31.536076       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0531 19:21:31.537183       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:21:31.537251       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0531 19:21:31.537279       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0531 19:21:31.577804       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0531 19:21:31.578788       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:21:31.583390       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0531 19:21:31.584407       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0531 19:21:31.587955       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0531 19:21:31.584433       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0531 19:21:31.688747       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7e3b7dd02f327a20e77b3111b262b53dc21a01ee00defd6ae4f374286ce4d2f6] <==
	E0531 19:15:20.416926       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:15:20.416973       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:20.416981       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:20.417055       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:15:20.417085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:15:20.417185       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:15:20.417214       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:15:21.289999       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0531 19:15:21.290032       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:15:21.401710       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:21.401834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:21.434360       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:15:21.434465       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:15:21.496821       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:15:21.497062       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0531 19:15:21.514099       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:15:21.514175       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:15:21.558217       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:15:21.558421       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:15:21.565720       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:15:21.565990       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0531 19:15:21.567222       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:15:21.567376       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0531 19:15:23.997193       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 19:19:52.797201       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834842    3102 topology_manager.go:215] "Topology Admit Handler" podUID="ee986eb4-5a08-4aee-9fbd-1180c6b8d145" podNamespace="kube-system" podName="storage-provisioner"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.834901    3102 topology_manager.go:215] "Topology Admit Handler" podUID="222bc4e6-48ac-49a5-ac08-029712c2c95b" podNamespace="default" podName="busybox-fc5497c4f-t6fcj"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.850630    3102 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871776    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-xtables-lock\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871818    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-lib-modules\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871835    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45d80625-5a0f-4e15-96fd-dbecc3b1dcc9-xtables-lock\") pod \"kube-proxy-8cnnb\" (UID: \"45d80625-5a0f-4e15-96fd-dbecc3b1dcc9\") " pod="kube-system/kube-proxy-8cnnb"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871877    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee986eb4-5a08-4aee-9fbd-1180c6b8d145-tmp\") pod \"storage-provisioner\" (UID: \"ee986eb4-5a08-4aee-9fbd-1180c6b8d145\") " pod="kube-system/storage-provisioner"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871915    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b40f7eb3-69bd-4fed-b782-6ddec667356d-cni-cfg\") pod \"kindnet-zcptz\" (UID: \"b40f7eb3-69bd-4fed-b782-6ddec667356d\") " pod="kube-system/kindnet-zcptz"
	May 31 19:21:31 multinode-988083 kubelet[3102]: I0531 19:21:31.871938    3102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45d80625-5a0f-4e15-96fd-dbecc3b1dcc9-lib-modules\") pod \"kube-proxy-8cnnb\" (UID: \"45d80625-5a0f-4e15-96fd-dbecc3b1dcc9\") " pod="kube-system/kube-proxy-8cnnb"
	May 31 19:21:39 multinode-988083 kubelet[3102]: I0531 19:21:39.004855    3102 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 31 19:22:27 multinode-988083 kubelet[3102]: E0531 19:22:27.904717    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 19:22:27 multinode-988083 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 19:22:27 multinode-988083 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 19:22:27 multinode-988083 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 19:22:27 multinode-988083 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 19:23:27 multinode-988083 kubelet[3102]: E0531 19:23:27.903004    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 19:23:27 multinode-988083 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 19:23:27 multinode-988083 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 19:23:27 multinode-988083 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 19:23:27 multinode-988083 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 31 19:24:27 multinode-988083 kubelet[3102]: E0531 19:24:27.904925    3102 iptables.go:577] "Could not set up iptables canary" err=<
	May 31 19:24:27 multinode-988083 kubelet[3102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 31 19:24:27 multinode-988083 kubelet[3102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 31 19:24:27 multinode-988083 kubelet[3102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 31 19:24:27 multinode-988083 kubelet[3102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:25:14.670275   53374 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
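Note on the stderr above: "bufio.Scanner: token too long" is Go's bufio.Scanner giving up on a line longer than its buffer (64 KiB by default), so lastStart.txt evidently contains at least one longer line. Below is a minimal sketch of the usual workaround, raising the scanner's buffer; the file name and program shape are illustrative stand-ins, not minikube's actual logs.go code.

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical stand-in for the file that triggered the error above.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Default max token size is 64 KiB; allow single lines of up to 10 MiB.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			// Without the larger buffer, this is where "token too long" surfaces.
			fmt.Fprintln(os.Stderr, err)
		}
	}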
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-988083 -n multinode-988083
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-988083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.42s)
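Aside on the repeated kubelet "Could not set up iptables canary" entries in the log above: "ip6tables ... can't initialize ip6tables table `nat'" generally means the guest kernel exposes no IPv6 NAT table (for example, the ip6table_nat module is not loaded), so the canary keeps failing on every retry. A small diagnostic sketch, not part of the test suite, that checks for the module:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// hasModule reports whether /proc/modules lists the given kernel module.
	// Modules compiled into the kernel do not appear there, so a "not loaded"
	// answer is a hint, not proof.
	func hasModule(name string) bool {
		data, err := os.ReadFile("/proc/modules")
		if err != nil {
			return false
		}
		return strings.Contains(string(data), name)
	}

	func main() {
		if hasModule("ip6table_nat") {
			fmt.Println("ip6table_nat is loaded; the canary failure has another cause")
			return
		}
		fmt.Println("ip6table_nat not loaded; kubelet's IPv6 nat canary will keep failing")
	}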

                                                
                                    
TestPreload (274.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-768262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0531 19:29:58.916382   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-768262 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m11.250444765s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-768262 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-768262 image pull gcr.io/k8s-minikube/busybox: (2.757221182s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-768262
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-768262: exit status 82 (2m0.450984006s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-768262"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
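For context on exit status 82 above: the stop path appears to request a guest shutdown and then poll the machine state until it reports stopped or a deadline passes, and here the VM was still "Running" after two minutes. A generic, hedged sketch of that stop-with-deadline pattern follows; the vm type and its methods are hypothetical stand-ins, not minikube's driver API.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// vm is a hypothetical stand-in for a machine driver.
	type vm struct{ state string }

	func (v *vm) Shutdown() error { return nil }     // ask the guest to power off (stub)
	func (v *vm) State() string   { return v.state } // "Running" or "Stopped"

	var errStopTimeout = errors.New("unable to stop vm, still Running")

	// stopWithDeadline requests a shutdown, then polls the reported state until
	// it is "Stopped" or the context deadline expires.
	func stopWithDeadline(ctx context.Context, v *vm) error {
		if err := v.Shutdown(); err != nil {
			return err
		}
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-ctx.Done():
				return errStopTimeout // a caller would map this to a non-zero exit code
			case <-tick.C:
				if v.State() == "Stopped" {
					return nil
				}
			}
		}
	}

	func main() {
		// The failed run above gave up after roughly two minutes; a short
		// deadline is used here so the sketch finishes quickly.
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
		defer cancel()
		if err := stopWithDeadline(ctx, &vm{state: "Running"}); err != nil {
			fmt.Println(err)
		}
	}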
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-768262 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-05-31 19:33:25.895748499 +0000 UTC m=+5880.419392973
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-768262 -n test-preload-768262
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-768262 -n test-preload-768262: exit status 3 (18.648993816s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:33:44.541253   56245 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host
	E0531 19:33:44.541276   56245 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.186:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-768262" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-768262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-768262
--- FAIL: TestPreload (274.03s)

                                                
                                    
TestKubernetesUpgrade (430.43s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m55.68783869s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-343216] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-343216" primary control-plane node in "kubernetes-upgrade-343216" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:35:40.594712   57319 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:35:40.594826   57319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:35:40.594835   57319 out.go:304] Setting ErrFile to fd 2...
	I0531 19:35:40.594842   57319 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:35:40.595106   57319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:35:40.595729   57319 out.go:298] Setting JSON to false
	I0531 19:35:40.596911   57319 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8287,"bootTime":1717175854,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:35:40.596996   57319 start.go:139] virtualization: kvm guest
	I0531 19:35:40.600237   57319 out.go:177] * [kubernetes-upgrade-343216] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:35:40.601583   57319 notify.go:220] Checking for updates...
	I0531 19:35:40.601597   57319 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:35:40.602886   57319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:35:40.604587   57319 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:35:40.605813   57319 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:35:40.607277   57319 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:35:40.608765   57319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:35:40.610081   57319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:35:40.644883   57319 out.go:177] * Using the kvm2 driver based on user configuration
	I0531 19:35:40.646203   57319 start.go:297] selected driver: kvm2
	I0531 19:35:40.646233   57319 start.go:901] validating driver "kvm2" against <nil>
	I0531 19:35:40.646254   57319 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:35:40.647181   57319 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:35:40.662414   57319 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:35:40.677527   57319 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:35:40.677584   57319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 19:35:40.677856   57319 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 19:35:40.677885   57319 cni.go:84] Creating CNI manager for ""
	I0531 19:35:40.677897   57319 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 19:35:40.677920   57319 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 19:35:40.677978   57319 start.go:340] cluster config:
	{Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:35:40.678106   57319 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:35:40.679811   57319 out.go:177] * Starting "kubernetes-upgrade-343216" primary control-plane node in "kubernetes-upgrade-343216" cluster
	I0531 19:35:40.681011   57319 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 19:35:40.681049   57319 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0531 19:35:40.681062   57319 cache.go:56] Caching tarball of preloaded images
	I0531 19:35:40.681167   57319 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:35:40.681177   57319 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0531 19:35:40.681516   57319 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/config.json ...
	I0531 19:35:40.681539   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/config.json: {Name:mkdc3d6efea82718cabb3599e50d9d6eba05ae01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:35:40.681687   57319 start.go:360] acquireMachinesLock for kubernetes-upgrade-343216: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:36:05.698109   57319 start.go:364] duration metric: took 25.016366471s to acquireMachinesLock for "kubernetes-upgrade-343216"
	I0531 19:36:05.698180   57319 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:36:05.698351   57319 start.go:125] createHost starting for "" (driver="kvm2")
	I0531 19:36:05.701058   57319 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 19:36:05.701327   57319 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:36:05.701383   57319 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:36:05.718952   57319 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37905
	I0531 19:36:05.719444   57319 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:36:05.720015   57319 main.go:141] libmachine: Using API Version  1
	I0531 19:36:05.720043   57319 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:36:05.720418   57319 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:36:05.720611   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:36:05.720790   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:05.720946   57319 start.go:159] libmachine.API.Create for "kubernetes-upgrade-343216" (driver="kvm2")
	I0531 19:36:05.720988   57319 client.go:168] LocalClient.Create starting
	I0531 19:36:05.721029   57319 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 19:36:05.721068   57319 main.go:141] libmachine: Decoding PEM data...
	I0531 19:36:05.721090   57319 main.go:141] libmachine: Parsing certificate...
	I0531 19:36:05.721155   57319 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 19:36:05.721181   57319 main.go:141] libmachine: Decoding PEM data...
	I0531 19:36:05.721206   57319 main.go:141] libmachine: Parsing certificate...
	I0531 19:36:05.721225   57319 main.go:141] libmachine: Running pre-create checks...
	I0531 19:36:05.721243   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .PreCreateCheck
	I0531 19:36:05.721598   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetConfigRaw
	I0531 19:36:05.722018   57319 main.go:141] libmachine: Creating machine...
	I0531 19:36:05.722033   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .Create
	I0531 19:36:05.722162   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Creating KVM machine...
	I0531 19:36:05.723252   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found existing default KVM network
	I0531 19:36:05.724222   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:05.724065   57620 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:94:a3:38} reservation:<nil>}
	I0531 19:36:05.725077   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:05.724992   57620 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fa50}
	I0531 19:36:05.725124   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | created network xml: 
	I0531 19:36:05.725153   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | <network>
	I0531 19:36:05.725165   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   <name>mk-kubernetes-upgrade-343216</name>
	I0531 19:36:05.725180   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   <dns enable='no'/>
	I0531 19:36:05.725189   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   
	I0531 19:36:05.725201   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0531 19:36:05.725221   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |     <dhcp>
	I0531 19:36:05.725231   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0531 19:36:05.725246   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |     </dhcp>
	I0531 19:36:05.725267   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   </ip>
	I0531 19:36:05.725280   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG |   
	I0531 19:36:05.725290   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | </network>
	I0531 19:36:05.725326   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | 
	I0531 19:36:05.730844   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | trying to create private KVM network mk-kubernetes-upgrade-343216 192.168.50.0/24...
	I0531 19:36:05.801475   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | private KVM network mk-kubernetes-upgrade-343216 192.168.50.0/24 created
	I0531 19:36:05.801510   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:05.801445   57620 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:36:05.801534   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216 ...
	I0531 19:36:05.801556   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 19:36:05.801573   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 19:36:06.027366   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:06.027245   57620 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa...
	I0531 19:36:06.253873   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:06.253724   57620 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/kubernetes-upgrade-343216.rawdisk...
	I0531 19:36:06.253907   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Writing magic tar header
	I0531 19:36:06.253924   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Writing SSH key tar header
	I0531 19:36:06.253938   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:06.253843   57620 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216 ...
	I0531 19:36:06.254027   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216
	I0531 19:36:06.254072   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216 (perms=drwx------)
	I0531 19:36:06.254084   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 19:36:06.254101   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:36:06.254115   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 19:36:06.254131   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 19:36:06.254148   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 19:36:06.254158   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 19:36:06.254172   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 19:36:06.254182   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 19:36:06.254191   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 19:36:06.254199   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Creating domain...
	I0531 19:36:06.254212   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home/jenkins
	I0531 19:36:06.254222   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Checking permissions on dir: /home
	I0531 19:36:06.254231   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Skipping /home - not owner
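	(The "Creating ssh key" step above writes an id_rsa whose mode is later checked as -rw-------. A minimal sketch of producing the same kind of keypair in Go follows; it is not minikube's code, and the output file names and 2048-bit key size are assumptions for the example.)

// Illustrative sketch: generate an RSA SSH keypair and write the private key
// with restrictive 0600 permissions, like the id_rsa created in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Private key in PEM (PKCS#1), mode 0600.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		log.Fatal(err)
	}

	// Matching public key in authorized_keys format.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}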
	I0531 19:36:06.255244   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) define libvirt domain using xml: 
	I0531 19:36:06.255267   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) <domain type='kvm'>
	I0531 19:36:06.255287   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <name>kubernetes-upgrade-343216</name>
	I0531 19:36:06.255306   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <memory unit='MiB'>2200</memory>
	I0531 19:36:06.255336   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <vcpu>2</vcpu>
	I0531 19:36:06.255358   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <features>
	I0531 19:36:06.255368   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <acpi/>
	I0531 19:36:06.255379   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <apic/>
	I0531 19:36:06.255391   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <pae/>
	I0531 19:36:06.255399   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     
	I0531 19:36:06.255410   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   </features>
	I0531 19:36:06.255421   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <cpu mode='host-passthrough'>
	I0531 19:36:06.255433   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   
	I0531 19:36:06.255442   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   </cpu>
	I0531 19:36:06.255451   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <os>
	I0531 19:36:06.255461   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <type>hvm</type>
	I0531 19:36:06.255474   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <boot dev='cdrom'/>
	I0531 19:36:06.255484   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <boot dev='hd'/>
	I0531 19:36:06.255496   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <bootmenu enable='no'/>
	I0531 19:36:06.255507   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   </os>
	I0531 19:36:06.255519   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   <devices>
	I0531 19:36:06.255531   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <disk type='file' device='cdrom'>
	I0531 19:36:06.255548   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/boot2docker.iso'/>
	I0531 19:36:06.255559   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <target dev='hdc' bus='scsi'/>
	I0531 19:36:06.255569   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <readonly/>
	I0531 19:36:06.255587   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </disk>
	I0531 19:36:06.255611   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <disk type='file' device='disk'>
	I0531 19:36:06.255624   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 19:36:06.255642   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/kubernetes-upgrade-343216.rawdisk'/>
	I0531 19:36:06.255660   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <target dev='hda' bus='virtio'/>
	I0531 19:36:06.255671   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </disk>
	I0531 19:36:06.255683   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <interface type='network'>
	I0531 19:36:06.255696   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <source network='mk-kubernetes-upgrade-343216'/>
	I0531 19:36:06.255707   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <model type='virtio'/>
	I0531 19:36:06.255715   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </interface>
	I0531 19:36:06.255725   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <interface type='network'>
	I0531 19:36:06.255733   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <source network='default'/>
	I0531 19:36:06.255743   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <model type='virtio'/>
	I0531 19:36:06.255751   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </interface>
	I0531 19:36:06.255761   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <serial type='pty'>
	I0531 19:36:06.255769   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <target port='0'/>
	I0531 19:36:06.255779   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </serial>
	I0531 19:36:06.255792   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <console type='pty'>
	I0531 19:36:06.255803   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <target type='serial' port='0'/>
	I0531 19:36:06.255814   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </console>
	I0531 19:36:06.255822   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     <rng model='virtio'>
	I0531 19:36:06.255834   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)       <backend model='random'>/dev/random</backend>
	I0531 19:36:06.255844   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     </rng>
	I0531 19:36:06.255856   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     
	I0531 19:36:06.255865   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)     
	I0531 19:36:06.255873   57319 main.go:141] libmachine: (kubernetes-upgrade-343216)   </devices>
	I0531 19:36:06.255883   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) </domain>
	I0531 19:36:06.255894   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) 
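	(Once the domain XML above has been assembled, defining and booting the VM through the libvirt Go bindings can be sketched as below. Again an illustration under the assumption of libvirt.org/go/libvirt and qemu:///system; domain.xml stands in for the XML printed above saved to a file.)

// Illustrative sketch: define a persistent domain from XML and boot it,
// mirroring the "define libvirt domain using xml" / "Creating domain..." steps.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	xmlBytes, err := os.ReadFile("domain.xml") // the XML shown above
	if err != nil {
		log.Fatalf("read domain xml: %v", err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xmlBytes))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	// Create() starts the previously defined (persistent) domain.
	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain started; a DHCP lease will appear once the guest boots")
}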
	I0531 19:36:06.260226   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:c2:08:26 in network default
	I0531 19:36:06.260811   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Ensuring networks are active...
	I0531 19:36:06.260837   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:06.261488   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Ensuring network default is active
	I0531 19:36:06.261750   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Ensuring network mk-kubernetes-upgrade-343216 is active
	I0531 19:36:06.262241   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Getting domain xml...
	I0531 19:36:06.262968   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Creating domain...
	I0531 19:36:07.542760   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Waiting to get IP...
	I0531 19:36:07.543654   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:07.544107   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:07.544136   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:07.544048   57620 retry.go:31] will retry after 203.729646ms: waiting for machine to come up
	I0531 19:36:07.749603   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:07.750008   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:07.750036   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:07.749954   57620 retry.go:31] will retry after 297.083074ms: waiting for machine to come up
	I0531 19:36:08.048324   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.048810   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.048835   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:08.048775   57620 retry.go:31] will retry after 294.530745ms: waiting for machine to come up
	I0531 19:36:08.345484   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.346026   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.346056   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:08.345912   57620 retry.go:31] will retry after 556.641656ms: waiting for machine to come up
	I0531 19:36:08.903812   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.904310   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:08.904344   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:08.904256   57620 retry.go:31] will retry after 462.184774ms: waiting for machine to come up
	I0531 19:36:09.367970   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:09.368445   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:09.368475   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:09.368400   57620 retry.go:31] will retry after 666.365949ms: waiting for machine to come up
	I0531 19:36:10.036488   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:10.037189   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:10.037228   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:10.037142   57620 retry.go:31] will retry after 1.187555289s: waiting for machine to come up
	I0531 19:36:11.226523   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:11.227008   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:11.227038   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:11.226955   57620 retry.go:31] will retry after 1.257704234s: waiting for machine to come up
	I0531 19:36:12.486127   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:12.486625   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:12.486651   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:12.486570   57620 retry.go:31] will retry after 1.423932476s: waiting for machine to come up
	I0531 19:36:13.912089   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:13.912561   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:13.912588   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:13.912509   57620 retry.go:31] will retry after 2.11891337s: waiting for machine to come up
	I0531 19:36:16.032895   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:16.033399   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:16.033430   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:16.033345   57620 retry.go:31] will retry after 2.7881302s: waiting for machine to come up
	I0531 19:36:18.822905   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:18.823334   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:18.823364   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:18.823263   57620 retry.go:31] will retry after 2.717480077s: waiting for machine to come up
	I0531 19:36:21.541945   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:21.542449   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:21.542479   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:21.542411   57620 retry.go:31] will retry after 2.743411133s: waiting for machine to come up
	I0531 19:36:24.287004   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:24.287432   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find current IP address of domain kubernetes-upgrade-343216 in network mk-kubernetes-upgrade-343216
	I0531 19:36:24.287455   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | I0531 19:36:24.287397   57620 retry.go:31] will retry after 4.226560383s: waiting for machine to come up
	I0531 19:36:28.515054   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.515489   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Found IP for machine: 192.168.50.31
	I0531 19:36:28.515518   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Reserving static IP address...
	I0531 19:36:28.515533   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has current primary IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.515888   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-343216", mac: "52:54:00:e1:d2:fc", ip: "192.168.50.31"} in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.585782   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Reserved static IP address: 192.168.50.31
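	(The repeated "will retry after ..." lines above come from a retry loop with growing sleep intervals while the new VM waits for its DHCP lease. A generic sketch of that pattern is shown below; it is not minikube's retry.go, and the timeout and interval choices are assumptions.)

// Illustrative sketch: poll a condition with growing sleep intervals,
// similar in spirit to the retry.go lines in the log above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryUntil keeps calling check until it succeeds or the deadline passes,
// roughly doubling the wait between attempts.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; last error: %w", err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("machine is up after", attempts, "attempts")
}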
	I0531 19:36:28.585813   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Waiting for SSH to be available...
	I0531 19:36:28.585822   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Getting to WaitForSSH function...
	I0531 19:36:28.588719   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.589193   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:28.589225   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.589305   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Using SSH client type: external
	I0531 19:36:28.589364   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa (-rw-------)
	I0531 19:36:28.589393   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 19:36:28.589411   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | About to run SSH command:
	I0531 19:36:28.589423   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | exit 0
	I0531 19:36:28.721053   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | SSH cmd err, output: <nil>: 
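	(The WaitForSSH step above simply runs "exit 0" over SSH with the non-interactive option set listed in the log until it succeeds. A sketch of that check using the external ssh client follows; the options, key path, and address are copied from the log lines above, and this is only an illustration of the technique.)

// Illustrative sketch: confirm the guest accepts SSH logins by running
// "exit 0" with a non-interactive option set like the one logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa",
		"-p", "22",
		"docker@192.168.50.31",
		"exit 0")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("ssh not ready: %v: %s", err, out)
	}
	log.Println("SSH is available")
}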
	I0531 19:36:28.721381   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) KVM machine creation complete!
	I0531 19:36:28.721715   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetConfigRaw
	I0531 19:36:28.722240   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:28.722444   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:28.722671   57319 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 19:36:28.722687   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetState
	I0531 19:36:28.723994   57319 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 19:36:28.724012   57319 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 19:36:28.724020   57319 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 19:36:28.724033   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:28.726207   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.726610   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:28.726638   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.726763   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:28.726944   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.727133   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.727280   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:28.727486   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:28.727739   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:28.727756   57319 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 19:36:28.840320   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:36:28.840345   57319 main.go:141] libmachine: Detecting the provisioner...
	I0531 19:36:28.840353   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:28.843154   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.843520   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:28.843551   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.843755   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:28.843944   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.844105   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.844261   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:28.844425   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:28.844588   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:28.844600   57319 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 19:36:28.957637   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 19:36:28.957718   57319 main.go:141] libmachine: found compatible host: buildroot
	I0531 19:36:28.957729   57319 main.go:141] libmachine: Provisioning with buildroot...
	I0531 19:36:28.957738   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:36:28.958004   57319 buildroot.go:166] provisioning hostname "kubernetes-upgrade-343216"
	I0531 19:36:28.958035   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:36:28.958237   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:28.960735   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.961161   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:28.961188   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:28.961306   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:28.961497   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.961646   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:28.961807   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:28.961986   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:28.962156   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:28.962168   57319 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-343216 && echo "kubernetes-upgrade-343216" | sudo tee /etc/hostname
	I0531 19:36:29.087770   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-343216
	
	I0531 19:36:29.087796   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:29.090989   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.091374   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.091422   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.091609   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:29.091795   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.091954   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.092152   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:29.092337   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:29.092537   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:29.092553   57319 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-343216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-343216/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-343216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:36:29.222323   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:36:29.222349   57319 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:36:29.222396   57319 buildroot.go:174] setting up certificates
	I0531 19:36:29.222405   57319 provision.go:84] configureAuth start
	I0531 19:36:29.222415   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:36:29.222753   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:36:29.225237   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.225555   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.225587   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.225789   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:29.228259   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.228572   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.228603   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.228716   57319 provision.go:143] copyHostCerts
	I0531 19:36:29.228795   57319 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:36:29.228808   57319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:36:29.228876   57319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:36:29.229047   57319 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:36:29.229058   57319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:36:29.229093   57319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:36:29.229178   57319 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:36:29.229188   57319 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:36:29.229232   57319 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:36:29.229311   57319 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-343216 san=[127.0.0.1 192.168.50.31 kubernetes-upgrade-343216 localhost minikube]
	I0531 19:36:29.442544   57319 provision.go:177] copyRemoteCerts
	I0531 19:36:29.442608   57319 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:36:29.442637   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:29.446019   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.446406   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.446436   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.446607   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:29.446800   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.446947   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:29.447170   57319 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:36:29.535289   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0531 19:36:29.562793   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:36:29.592864   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:36:29.616591   57319 provision.go:87] duration metric: took 394.176456ms to configureAuth
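	(The configureAuth step above generates a server certificate whose SANs include 127.0.0.1, the VM IP, the machine name, localhost, and minikube. A self-contained sketch of producing a SAN-bearing, CA-signed server certificate with crypto/x509 follows; it is not minikube's provision code, and the throwaway self-signed CA is an assumption made so the example stands alone.)

// Illustrative sketch: create a throwaway CA, then sign a server certificate
// carrying the SANs seen in the configureAuth log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-343216"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log.
		DNSNames:    []string{"kubernetes-upgrade-343216", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.31")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
	log.Println("wrote server.pem and server-key.pem")
}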
	I0531 19:36:29.616615   57319 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:36:29.616760   57319 config.go:182] Loaded profile config "kubernetes-upgrade-343216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0531 19:36:29.616833   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:29.619395   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.619682   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.619715   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.619879   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:29.620080   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.620253   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.620371   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:29.620579   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:29.620764   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:29.620781   57319 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:36:29.900377   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:36:29.900401   57319 main.go:141] libmachine: Checking connection to Docker...
	I0531 19:36:29.900427   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetURL
	I0531 19:36:29.901827   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | Using libvirt version 6000000
	I0531 19:36:29.904203   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.904505   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.904528   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.904696   57319 main.go:141] libmachine: Docker is up and running!
	I0531 19:36:29.904711   57319 main.go:141] libmachine: Reticulating splines...
	I0531 19:36:29.904718   57319 client.go:171] duration metric: took 24.183720139s to LocalClient.Create
	I0531 19:36:29.904745   57319 start.go:167] duration metric: took 24.18379927s to libmachine.API.Create "kubernetes-upgrade-343216"
	I0531 19:36:29.904758   57319 start.go:293] postStartSetup for "kubernetes-upgrade-343216" (driver="kvm2")
	I0531 19:36:29.904773   57319 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:36:29.904803   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:29.905098   57319 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:36:29.905118   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:29.907191   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.907550   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:29.907581   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:29.907698   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:29.907913   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:29.908100   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:29.908272   57319 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:36:29.995683   57319 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:36:30.000201   57319 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:36:30.000226   57319 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:36:30.000291   57319 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:36:30.000381   57319 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:36:30.000474   57319 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:36:30.009819   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:36:30.034116   57319 start.go:296] duration metric: took 129.342883ms for postStartSetup
	I0531 19:36:30.034171   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetConfigRaw
	I0531 19:36:30.034804   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:36:30.037736   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.038125   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:30.038152   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.038406   57319 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/config.json ...
	I0531 19:36:30.038567   57319 start.go:128] duration metric: took 24.340202814s to createHost
	I0531 19:36:30.038586   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:30.040994   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.041289   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:30.041317   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.041461   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:30.041649   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:30.041836   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:30.041984   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:30.042208   57319 main.go:141] libmachine: Using SSH client type: native
	I0531 19:36:30.042480   57319 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:36:30.042500   57319 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 19:36:30.157858   57319 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717184190.135064109
	
	I0531 19:36:30.157888   57319 fix.go:216] guest clock: 1717184190.135064109
	I0531 19:36:30.157899   57319 fix.go:229] Guest: 2024-05-31 19:36:30.135064109 +0000 UTC Remote: 2024-05-31 19:36:30.038577761 +0000 UTC m=+49.482094178 (delta=96.486348ms)
	I0531 19:36:30.157928   57319 fix.go:200] guest clock delta is within tolerance: 96.486348ms
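	(The fix.go lines above compare the guest clock, read via `date +%s.%N`, against the host clock and accept the machine when the delta stays within a tolerance. A minimal sketch of that comparison follows; the 2s tolerance is an assumption for illustration only.)

// Illustrative sketch: parse a "seconds.nanoseconds" timestamp from the guest
// and check the delta against the host clock.
package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	const guestOut = "1717184190.135064109" // output of `date +%s.%N` on the guest
	var sec, nsec int64
	fmt.Sscanf(guestOut, "%d.%d", &sec, &nsec)

	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)

	const tolerance = 2 * time.Second // assumed tolerance for the example
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock is off by %v; would resync\n", delta)
	}
}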
	I0531 19:36:30.157935   57319 start.go:83] releasing machines lock for "kubernetes-upgrade-343216", held for 24.459791766s
	I0531 19:36:30.157968   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:30.158263   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:36:30.161195   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.161578   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:30.161599   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.161820   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:30.162280   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:30.162465   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:36:30.162585   57319 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:36:30.162637   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:30.162676   57319 ssh_runner.go:195] Run: cat /version.json
	I0531 19:36:30.162700   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:36:30.165641   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.165835   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.166018   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:30.166047   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.166214   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:30.166217   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:30.166240   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:30.166402   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:36:30.166442   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:30.166605   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:36:30.166608   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:30.166808   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:36:30.166806   57319 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:36:30.166936   57319 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:36:30.280616   57319 ssh_runner.go:195] Run: systemctl --version
	I0531 19:36:30.287200   57319 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:36:30.445644   57319 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 19:36:30.451910   57319 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:36:30.451977   57319 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:36:30.468757   57319 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 19:36:30.468779   57319 start.go:494] detecting cgroup driver to use...
	I0531 19:36:30.468839   57319 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:36:30.485283   57319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:36:30.504483   57319 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:36:30.504541   57319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:36:30.519100   57319 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:36:30.532365   57319 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:36:30.655967   57319 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:36:30.801063   57319 docker.go:233] disabling docker service ...
	I0531 19:36:30.801137   57319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:36:30.817035   57319 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:36:30.830465   57319 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:36:30.967605   57319 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:36:31.096512   57319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:36:31.114437   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:36:31.137825   57319 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:36:31.137894   57319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:36:31.152081   57319 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:36:31.152164   57319 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:36:31.166380   57319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:36:31.180534   57319 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
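	(Annotation: taken together, the three sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and append a conmon_cgroup entry. Assuming the first two keys already existed in the drop-in, since those seds only rewrite matching lines, the relevant part of /etc/crio/crio.conf.d/02-crio.conf ends up as:)
	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"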
	I0531 19:36:31.194886   57319 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:36:31.208314   57319 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:36:31.221893   57319 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 19:36:31.221960   57319 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 19:36:31.237736   57319 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
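	(Annotation: the earlier sysctl probe failed only because br_netfilter was not yet loaded; re-running it after the modprobe is the usual confirmation. A sketch; the resulting value is not captured in this log:)
	    sudo modprobe br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables    # key exists once the module is loaded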
	I0531 19:36:31.249206   57319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:36:31.370720   57319 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:36:31.522807   57319 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:36:31.522869   57319 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:36:31.528062   57319 start.go:562] Will wait 60s for crictl version
	I0531 19:36:31.528119   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:31.532319   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:36:31.582677   57319 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:36:31.582769   57319 ssh_runner.go:195] Run: crio --version
	I0531 19:36:31.615330   57319 ssh_runner.go:195] Run: crio --version
	I0531 19:36:31.653069   57319 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0531 19:36:31.654215   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:36:31.657216   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:31.657726   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:36:20 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:36:31.657757   57319 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:36:31.658115   57319 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0531 19:36:31.662737   57319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
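	(Annotation: the bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and re-appends it, so afterwards the file should contain, per the gateway IP used in this run:)
	    $ grep host.minikube.internal /etc/hosts
	    192.168.50.1	host.minikube.internal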
	I0531 19:36:31.677008   57319 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:36:31.677125   57319 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 19:36:31.677221   57319 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:36:31.730494   57319 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0531 19:36:31.730575   57319 ssh_runner.go:195] Run: which lz4
	I0531 19:36:31.736927   57319 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0531 19:36:31.743458   57319 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 19:36:31.743484   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0531 19:36:33.509918   57319 crio.go:462] duration metric: took 1.773010149s to copy over tarball
	I0531 19:36:33.510016   57319 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 19:36:36.236600   57319 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.726548757s)
	I0531 19:36:36.236640   57319 crio.go:469] duration metric: took 2.726683561s to extract the tarball
	I0531 19:36:36.236650   57319 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0531 19:36:36.281978   57319 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:36:36.349658   57319 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0531 19:36:36.349688   57319 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 19:36:36.349770   57319 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:36:36.349808   57319 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:36:36.349834   57319 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:36:36.349808   57319 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:36:36.349863   57319 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:36:36.349838   57319 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0531 19:36:36.349918   57319 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 19:36:36.350132   57319 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:36:36.351909   57319 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:36:36.351913   57319 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:36:36.351944   57319 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:36:36.351909   57319 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:36:36.352001   57319 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 19:36:36.352301   57319 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:36:36.352324   57319 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:36:36.352348   57319 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0531 19:36:36.524699   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:36:36.540217   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:36:36.552375   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0531 19:36:36.572590   57319 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0531 19:36:36.572640   57319 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:36:36.572691   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.633962   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:36:36.634681   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:36:36.635008   57319 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0531 19:36:36.635055   57319 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:36:36.635096   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.639567   57319 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0531 19:36:36.639606   57319 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0531 19:36:36.639639   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:36:36.639643   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.643132   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0531 19:36:36.657953   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0531 19:36:36.745196   57319 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0531 19:36:36.745296   57319 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:36:36.745376   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.808811   57319 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0531 19:36:36.808871   57319 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:36:36.808935   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.809068   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:36:36.809165   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0531 19:36:36.809287   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0531 19:36:36.809342   57319 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0531 19:36:36.809380   57319 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:36:36.809430   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.830392   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:36:36.830503   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:36:36.830632   57319 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0531 19:36:36.830663   57319 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0531 19:36:36.830696   57319 ssh_runner.go:195] Run: which crictl
	I0531 19:36:36.918699   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0531 19:36:36.918845   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0531 19:36:36.918898   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0531 19:36:36.947901   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0531 19:36:36.948005   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0531 19:36:36.948031   57319 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0531 19:36:36.996755   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0531 19:36:36.996851   57319 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0531 19:36:37.377184   57319 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:36:37.526198   57319 cache_images.go:92] duration metric: took 1.176491137s to LoadCachedImages
	W0531 19:36:37.526290   57319 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0531 19:36:37.526307   57319 kubeadm.go:928] updating node { 192.168.50.31 8443 v1.20.0 crio true true} ...
	I0531 19:36:37.526456   57319 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-343216 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 19:36:37.526552   57319 ssh_runner.go:195] Run: crio config
	I0531 19:36:37.593598   57319 cni.go:84] Creating CNI manager for ""
	I0531 19:36:37.593621   57319 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 19:36:37.593633   57319 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:36:37.593660   57319 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.31 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-343216 NodeName:kubernetes-upgrade-343216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0531 19:36:37.593820   57319 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-343216"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:36:37.593889   57319 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0531 19:36:37.604514   57319 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:36:37.604577   57319 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:36:37.614370   57319 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I0531 19:36:37.632120   57319 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:36:37.650413   57319 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0531 19:36:37.670355   57319 ssh_runner.go:195] Run: grep 192.168.50.31	control-plane.minikube.internal$ /etc/hosts
	I0531 19:36:37.674879   57319 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.31	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:36:37.689201   57319 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:36:37.811220   57319 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:36:37.833731   57319 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216 for IP: 192.168.50.31
	I0531 19:36:37.833754   57319 certs.go:194] generating shared ca certs ...
	I0531 19:36:37.833771   57319 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:37.833966   57319 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:36:37.834025   57319 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:36:37.834037   57319 certs.go:256] generating profile certs ...
	I0531 19:36:37.834107   57319 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.key
	I0531 19:36:37.834132   57319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.crt with IP's: []
	I0531 19:36:38.004677   57319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.crt ...
	I0531 19:36:38.004704   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.crt: {Name:mk26eed55534ef7269a7efceaff86438ec3e1da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.004910   57319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.key ...
	I0531 19:36:38.004931   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.key: {Name:mk75a6a573cc56fc97b900bf2967d370ac23edbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.005080   57319 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key.d10b9b8a
	I0531 19:36:38.005099   57319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt.d10b9b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.31]
	I0531 19:36:38.318931   57319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt.d10b9b8a ...
	I0531 19:36:38.318958   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt.d10b9b8a: {Name:mk109a5e13c3b387c15be4201f2b36ac17a15f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.319113   57319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key.d10b9b8a ...
	I0531 19:36:38.319127   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key.d10b9b8a: {Name:mk9b2dd219eb6598c355f4788abd6de41eddfc90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.319202   57319 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt.d10b9b8a -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt
	I0531 19:36:38.319291   57319 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key.d10b9b8a -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key
	I0531 19:36:38.319347   57319 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key
	I0531 19:36:38.319364   57319 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.crt with IP's: []
	I0531 19:36:38.548591   57319 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.crt ...
	I0531 19:36:38.548617   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.crt: {Name:mk4609859a995d779029d275828c4643c8bc7a49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.548787   57319 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key ...
	I0531 19:36:38.548808   57319 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key: {Name:mk81bf2c8d05d9927cf087a7c9db4e93405ba52b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:36:38.549054   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:36:38.549103   57319 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:36:38.549118   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:36:38.549148   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:36:38.549179   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:36:38.549218   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:36:38.549276   57319 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:36:38.549840   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:36:38.581457   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:36:38.609314   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:36:38.638706   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:36:38.689395   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0531 19:36:38.714953   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:36:38.740438   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:36:38.776986   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:36:38.802510   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:36:38.831224   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:36:38.857952   57319 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:36:38.884394   57319 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:36:38.904225   57319 ssh_runner.go:195] Run: openssl version
	I0531 19:36:38.910331   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:36:38.922517   57319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:36:38.927465   57319 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:36:38.927519   57319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:36:38.933332   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:36:38.944894   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:36:38.958476   57319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:36:38.963466   57319 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:36:38.963523   57319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:36:38.969607   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
	I0531 19:36:38.981886   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:36:38.994392   57319 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:36:38.999085   57319 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:36:38.999136   57319 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:36:39.005025   57319 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
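	(Annotation: the openssl/ln pairs above follow the standard OpenSSL hash-link layout: each CA under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink. For the minikubeCA certificate in this run that is equivalent to the sketch below; the hash b5213941 is taken from the log:)
	    hash="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"    # -> b5213941.0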
	I0531 19:36:39.017109   57319 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:36:39.021407   57319 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 19:36:39.021467   57319 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:36:39.021559   57319 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:36:39.021610   57319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:36:39.062352   57319 cri.go:89] found id: ""
	I0531 19:36:39.062433   57319 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:36:39.073707   57319 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:36:39.087962   57319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:36:39.098819   57319 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:36:39.098839   57319 kubeadm.go:156] found existing configuration files:
	
	I0531 19:36:39.098878   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:36:39.108819   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 19:36:39.108884   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 19:36:39.119686   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:36:39.129951   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 19:36:39.130041   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 19:36:39.141137   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:36:39.151388   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 19:36:39.151453   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:36:39.162191   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:36:39.172893   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 19:36:39.172972   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:36:39.184199   57319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0531 19:36:39.314636   57319 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0531 19:36:39.314742   57319 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 19:36:39.527028   57319 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:36:39.527197   57319 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:36:39.527371   57319 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 19:36:39.736203   57319 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:36:39.788364   57319 out.go:204]   - Generating certificates and keys ...
	I0531 19:36:39.788491   57319 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 19:36:39.788572   57319 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 19:36:39.808442   57319 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:36:40.196475   57319 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:36:40.416820   57319 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0531 19:36:40.500808   57319 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0531 19:36:40.578084   57319 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0531 19:36:40.578316   57319 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0531 19:36:40.706346   57319 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0531 19:36:40.706549   57319 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	I0531 19:36:40.892585   57319 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:36:40.971402   57319 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:36:41.110959   57319 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0531 19:36:41.111229   57319 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:36:41.307295   57319 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:36:41.504329   57319 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:36:41.972048   57319 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:36:42.320988   57319 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:36:42.336580   57319 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:36:42.337670   57319 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:36:42.337758   57319 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 19:36:42.485021   57319 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:36:42.486777   57319 out.go:204]   - Booting up control plane ...
	I0531 19:36:42.486901   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:36:42.492407   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:36:42.493390   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:36:42.494293   57319 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:36:42.500882   57319 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:37:22.494831   57319 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0531 19:37:22.495066   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:37:22.495427   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:37:27.495952   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:37:27.496242   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:37:37.495368   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:37:37.495652   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:37:57.495522   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:37:57.495862   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:38:37.497531   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:38:37.497811   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:38:37.497828   57319 kubeadm.go:309] 
	I0531 19:38:37.497877   57319 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0531 19:38:37.497965   57319 kubeadm.go:309] 		timed out waiting for the condition
	I0531 19:38:37.497993   57319 kubeadm.go:309] 
	I0531 19:38:37.498045   57319 kubeadm.go:309] 	This error is likely caused by:
	I0531 19:38:37.498092   57319 kubeadm.go:309] 		- The kubelet is not running
	I0531 19:38:37.498249   57319 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0531 19:38:37.498260   57319 kubeadm.go:309] 
	I0531 19:38:37.498401   57319 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0531 19:38:37.498465   57319 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0531 19:38:37.498521   57319 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0531 19:38:37.498532   57319 kubeadm.go:309] 
	I0531 19:38:37.498646   57319 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0531 19:38:37.498753   57319 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0531 19:38:37.498764   57319 kubeadm.go:309] 
	I0531 19:38:37.498884   57319 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0531 19:38:37.498995   57319 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0531 19:38:37.499097   57319 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0531 19:38:37.499198   57319 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0531 19:38:37.499207   57319 kubeadm.go:309] 
	I0531 19:38:37.499626   57319 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:38:37.499726   57319 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0531 19:38:37.499905   57319 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0531 19:38:37.499981   57319 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-343216 localhost] and IPs [192.168.50.31 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0531 19:38:37.500044   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0531 19:38:39.130428   57319 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.630358594s)
	I0531 19:38:39.130501   57319 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:38:39.152135   57319 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:38:39.166316   57319 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:38:39.166344   57319 kubeadm.go:156] found existing configuration files:
	
	I0531 19:38:39.166393   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:38:39.179158   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 19:38:39.179217   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 19:38:39.193806   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:38:39.205712   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 19:38:39.205773   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 19:38:39.218795   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:38:39.231685   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 19:38:39.231745   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:38:39.244008   57319 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:38:39.257026   57319 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 19:38:39.257092   57319 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:38:39.270654   57319 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0531 19:38:39.527923   57319 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0531 19:40:35.541244   57319 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0531 19:40:35.541362   57319 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0531 19:40:35.542994   57319 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0531 19:40:35.543075   57319 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 19:40:35.543152   57319 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:40:35.543268   57319 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:40:35.543388   57319 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 19:40:35.543438   57319 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:40:35.545029   57319 out.go:204]   - Generating certificates and keys ...
	I0531 19:40:35.545114   57319 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 19:40:35.545197   57319 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 19:40:35.545312   57319 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0531 19:40:35.545382   57319 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0531 19:40:35.545437   57319 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0531 19:40:35.545522   57319 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0531 19:40:35.545589   57319 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0531 19:40:35.545658   57319 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0531 19:40:35.545740   57319 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0531 19:40:35.545829   57319 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0531 19:40:35.545888   57319 kubeadm.go:309] [certs] Using the existing "sa" key
	I0531 19:40:35.545974   57319 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:40:35.546018   57319 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:40:35.546064   57319 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:40:35.546125   57319 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:40:35.546176   57319 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:40:35.546349   57319 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:40:35.546480   57319 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:40:35.546540   57319 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 19:40:35.546636   57319 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:40:35.548723   57319 out.go:204]   - Booting up control plane ...
	I0531 19:40:35.548840   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:40:35.548946   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:40:35.549045   57319 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:40:35.549153   57319 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:40:35.549350   57319 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:40:35.549403   57319 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0531 19:40:35.549500   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:40:35.549661   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:40:35.549721   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:40:35.549881   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:40:35.549973   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:40:35.550156   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:40:35.550250   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:40:35.550476   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:40:35.550533   57319 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0531 19:40:35.550755   57319 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0531 19:40:35.550770   57319 kubeadm.go:309] 
	I0531 19:40:35.550822   57319 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0531 19:40:35.550860   57319 kubeadm.go:309] 		timed out waiting for the condition
	I0531 19:40:35.550867   57319 kubeadm.go:309] 
	I0531 19:40:35.550916   57319 kubeadm.go:309] 	This error is likely caused by:
	I0531 19:40:35.550963   57319 kubeadm.go:309] 		- The kubelet is not running
	I0531 19:40:35.551114   57319 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0531 19:40:35.551125   57319 kubeadm.go:309] 
	I0531 19:40:35.551267   57319 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0531 19:40:35.551318   57319 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0531 19:40:35.551364   57319 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0531 19:40:35.551375   57319 kubeadm.go:309] 
	I0531 19:40:35.551532   57319 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0531 19:40:35.551654   57319 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0531 19:40:35.551664   57319 kubeadm.go:309] 
	I0531 19:40:35.551781   57319 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0531 19:40:35.551881   57319 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0531 19:40:35.552003   57319 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0531 19:40:35.552111   57319 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0531 19:40:35.552135   57319 kubeadm.go:309] 
	I0531 19:40:35.552188   57319 kubeadm.go:393] duration metric: took 3m56.530724792s to StartCluster
	I0531 19:40:35.552244   57319 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0531 19:40:35.552303   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0531 19:40:35.594056   57319 cri.go:89] found id: ""
	I0531 19:40:35.594086   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.594096   57319 logs.go:278] No container was found matching "kube-apiserver"
	I0531 19:40:35.594105   57319 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0531 19:40:35.594163   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0531 19:40:35.636414   57319 cri.go:89] found id: ""
	I0531 19:40:35.636450   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.636464   57319 logs.go:278] No container was found matching "etcd"
	I0531 19:40:35.636474   57319 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0531 19:40:35.636546   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0531 19:40:35.685037   57319 cri.go:89] found id: ""
	I0531 19:40:35.685069   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.685082   57319 logs.go:278] No container was found matching "coredns"
	I0531 19:40:35.685090   57319 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0531 19:40:35.685153   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0531 19:40:35.731320   57319 cri.go:89] found id: ""
	I0531 19:40:35.731351   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.731363   57319 logs.go:278] No container was found matching "kube-scheduler"
	I0531 19:40:35.731370   57319 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0531 19:40:35.731439   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0531 19:40:35.776723   57319 cri.go:89] found id: ""
	I0531 19:40:35.776747   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.776756   57319 logs.go:278] No container was found matching "kube-proxy"
	I0531 19:40:35.776764   57319 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0531 19:40:35.776831   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0531 19:40:35.814055   57319 cri.go:89] found id: ""
	I0531 19:40:35.814089   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.814099   57319 logs.go:278] No container was found matching "kube-controller-manager"
	I0531 19:40:35.814107   57319 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0531 19:40:35.814170   57319 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0531 19:40:35.850806   57319 cri.go:89] found id: ""
	I0531 19:40:35.850834   57319 logs.go:276] 0 containers: []
	W0531 19:40:35.850844   57319 logs.go:278] No container was found matching "kindnet"
	I0531 19:40:35.850855   57319 logs.go:123] Gathering logs for dmesg ...
	I0531 19:40:35.850870   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0531 19:40:35.869528   57319 logs.go:123] Gathering logs for describe nodes ...
	I0531 19:40:35.869575   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0531 19:40:36.007526   57319 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0531 19:40:36.007553   57319 logs.go:123] Gathering logs for CRI-O ...
	I0531 19:40:36.007567   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0531 19:40:36.125186   57319 logs.go:123] Gathering logs for container status ...
	I0531 19:40:36.125229   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0531 19:40:36.168142   57319 logs.go:123] Gathering logs for kubelet ...
	I0531 19:40:36.168171   57319 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0531 19:40:36.227981   57319 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0531 19:40:36.228025   57319 out.go:239] * 
	* 
	W0531 19:40:36.228074   57319 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 19:40:36.228091   57319 out.go:239] * 
	* 
	W0531 19:40:36.228902   57319 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0531 19:40:36.231661   57319 out.go:177] 
	W0531 19:40:36.232823   57319 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0531 19:40:36.232881   57319 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0531 19:40:36.232910   57319 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0531 19:40:36.234218   57319 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
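Editor's note: for triage, the commands that the failure output above itself recommends can be run against this profile. A minimal sketch only; the profile name, memory, driver and runtime flags are taken from this run, and whether the suggested kubelet.cgroup-driver setting actually resolves the v1.20.0 kubelet startup failure is not verified by this report:

	minikube ssh -p kubernetes-upgrade-343216 "sudo systemctl status kubelet"
	minikube ssh -p kubernetes-upgrade-343216 "sudo journalctl -xeu kubelet --no-pager -n 100"
	minikube ssh -p kubernetes-upgrade-343216 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
	minikube start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd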
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-343216
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-343216: (1.884186946s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-343216 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-343216 status --format={{.Host}}: exit status 7 (89.86425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.016254853s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-343216 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (100.978537ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-343216] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-343216
	    minikube start -p kubernetes-upgrade-343216 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3432162 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-343216 --kubernetes-version=v1.30.1
	    

                                                
                                                
** /stderr **
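Editor's note: exit status 106 here is the expected outcome of this step ("should fail" above); minikube refuses to downgrade the existing v1.30.1 cluster in place. If a v1.20.0 cluster were actually wanted, the supported route per the suggestion printed above is delete-and-recreate; a sketch using this run's profile name and flags (assumed, not executed in this report):

	minikube delete -p kubernetes-upgrade-343216
	minikube start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio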
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-343216 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (49.08907994s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-05-31 19:42:47.534410019 +0000 UTC m=+6442.058054500
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-343216 -n kubernetes-upgrade-343216
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-343216 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-343216 logs -n 25: (1.698087772s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-585293 sudo crio            | cilium-585293             | jenkins | v1.33.1 | 31 May 24 19:39 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-585293                      | cilium-585293             | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:39 UTC |
	| start   | -p force-systemd-env-726812           | force-systemd-env-726812  | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:40 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-285796 ssh cat     | force-systemd-flag-285796 | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:39 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-285796          | force-systemd-flag-285796 | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:39 UTC |
	| start   | -p cert-expiration-149181             | cert-expiration-149181    | jenkins | v1.33.1 | 31 May 24 19:39 UTC | 31 May 24 19:40 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:40 UTC |
	| start   | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:40 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-726812           | force-systemd-env-726812  | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:40 UTC |
	| start   | -p cert-options-396055                | cert-options-396055       | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:41 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-343216          | kubernetes-upgrade-343216 | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:40 UTC |
	| start   | -p kubernetes-upgrade-343216          | kubernetes-upgrade-343216 | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:41 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-448728 sudo           | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:40 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:40 UTC |
	| start   | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:40 UTC | 31 May 24 19:42 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-396055 ssh               | cert-options-396055       | jenkins | v1.33.1 | 31 May 24 19:41 UTC | 31 May 24 19:41 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-396055 -- sudo        | cert-options-396055       | jenkins | v1.33.1 | 31 May 24 19:41 UTC | 31 May 24 19:41 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-396055                | cert-options-396055       | jenkins | v1.33.1 | 31 May 24 19:41 UTC | 31 May 24 19:41 UTC |
	| start   | -p old-k8s-version-422430             | old-k8s-version-422430    | jenkins | v1.33.1 | 31 May 24 19:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-343216          | kubernetes-upgrade-343216 | jenkins | v1.33.1 | 31 May 24 19:41 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-343216          | kubernetes-upgrade-343216 | jenkins | v1.33.1 | 31 May 24 19:41 UTC | 31 May 24 19:42 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-448728 sudo           | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:42 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-448728                | NoKubernetes-448728       | jenkins | v1.33.1 | 31 May 24 19:42 UTC | 31 May 24 19:42 UTC |
	| start   | -p no-preload-696038                  | no-preload-696038         | jenkins | v1.33.1 | 31 May 24 19:42 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --preload=false --driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1          |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 19:42:01
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 19:42:01.362386   65227 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:42:01.362857   65227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:01.362873   65227 out.go:304] Setting ErrFile to fd 2...
	I0531 19:42:01.362880   65227 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:42:01.363304   65227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:42:01.364433   65227 out.go:298] Setting JSON to false
	I0531 19:42:01.365340   65227 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":8667,"bootTime":1717175854,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 19:42:01.365408   65227 start.go:139] virtualization: kvm guest
	I0531 19:42:01.367273   65227 out.go:177] * [no-preload-696038] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 19:42:01.368886   65227 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 19:42:01.368839   65227 notify.go:220] Checking for updates...
	I0531 19:42:01.370284   65227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 19:42:01.371741   65227 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 19:42:01.373064   65227 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:42:01.374332   65227 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 19:42:01.375584   65227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 19:42:01.377376   65227 config.go:182] Loaded profile config "cert-expiration-149181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:42:01.377526   65227 config.go:182] Loaded profile config "kubernetes-upgrade-343216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:42:01.377664   65227 config.go:182] Loaded profile config "old-k8s-version-422430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0531 19:42:01.377764   65227 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 19:42:01.416575   65227 out.go:177] * Using the kvm2 driver based on user configuration
	I0531 19:42:01.417788   65227 start.go:297] selected driver: kvm2
	I0531 19:42:01.417813   65227 start.go:901] validating driver "kvm2" against <nil>
	I0531 19:42:01.417829   65227 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 19:42:01.418702   65227 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.418763   65227 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 19:42:01.435335   65227 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 19:42:01.435389   65227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 19:42:01.435594   65227 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0531 19:42:01.435662   65227 cni.go:84] Creating CNI manager for ""
	I0531 19:42:01.435673   65227 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 19:42:01.435684   65227 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 19:42:01.435748   65227 start.go:340] cluster config:
	{Name:no-preload-696038 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:no-preload-696038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:42:01.435854   65227 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.437583   65227 out.go:177] * Starting "no-preload-696038" primary control-plane node in "no-preload-696038" cluster
	I0531 19:41:58.605292   65042 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:41:58.605329   65042 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 19:41:58.605336   65042 cache.go:56] Caching tarball of preloaded images
	I0531 19:41:58.605406   65042 preload.go:173] Found /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0531 19:41:58.605416   65042 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on crio
	I0531 19:41:58.605495   65042 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/config.json ...
	I0531 19:41:58.605670   65042 start.go:360] acquireMachinesLock for kubernetes-upgrade-343216: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:42:00.133115   64814 main.go:141] libmachine: (old-k8s-version-422430) Waiting to get IP...
	I0531 19:42:00.133955   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:00.134410   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:00.134449   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:00.134397   64984 retry.go:31] will retry after 300.625278ms: waiting for machine to come up
	I0531 19:42:00.437000   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:00.437493   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:00.437550   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:00.437461   64984 retry.go:31] will retry after 372.386142ms: waiting for machine to come up
	I0531 19:42:00.811054   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:00.885938   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:00.885978   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:00.885881   64984 retry.go:31] will retry after 447.040389ms: waiting for machine to come up
	I0531 19:42:01.333971   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:01.334489   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:01.334516   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:01.334454   64984 retry.go:31] will retry after 570.11409ms: waiting for machine to come up
	I0531 19:42:01.905899   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:01.906433   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:01.906454   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:01.906403   64984 retry.go:31] will retry after 491.169715ms: waiting for machine to come up
	I0531 19:42:02.398724   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:02.399221   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:02.399249   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:02.399177   64984 retry.go:31] will retry after 724.499559ms: waiting for machine to come up
	I0531 19:42:03.125423   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:03.126070   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:03.126096   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:03.125998   64984 retry.go:31] will retry after 1.185385476s: waiting for machine to come up
	I0531 19:42:04.313114   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:04.313608   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:04.313637   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:04.313568   64984 retry.go:31] will retry after 1.05784343s: waiting for machine to come up
	I0531 19:42:01.438862   65227 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:42:01.439015   65227 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/no-preload-696038/config.json ...
	I0531 19:42:01.439050   65227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/no-preload-696038/config.json: {Name:mk5747a2776c2ef8f7933edf55a20095c831a77c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:01.439119   65227 cache.go:107] acquiring lock: {Name:mk42c1ed13c1939b243199034b9e995c8a14c2d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439193   65227 cache.go:107] acquiring lock: {Name:mk6a5531201fe3f655fc31079c7fcd0e09cb38f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439247   65227 start.go:360] acquireMachinesLock for no-preload-696038: {Name:mkd4b3e648e945fa0244c09a78f306ed5e669796 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0531 19:42:01.439241   65227 cache.go:107] acquiring lock: {Name:mk7098ea16eca6187f08c698ab3699f0c653ccd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439241   65227 cache.go:107] acquiring lock: {Name:mk80c0d781290c17f164059488341312f5886267 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439289   65227 cache.go:107] acquiring lock: {Name:mk5f195a1db3b9b9e7e83bf17f0317e9b7e6af44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439271   65227 cache.go:115] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0531 19:42:01.439344   65227 cache.go:107] acquiring lock: {Name:mk39d2affd2bee0013b2b3742b47e92b342a1e17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439361   65227 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 248.879µs
	I0531 19:42:01.439364   65227 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.1
	I0531 19:42:01.439379   65227 cache.go:107] acquiring lock: {Name:mk21b0d6c9e36b331546c81cbeabe61fa479d078 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439429   65227 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.1
	I0531 19:42:01.439447   65227 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0531 19:42:01.439483   65227 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0531 19:42:01.439484   65227 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.1
	I0531 19:42:01.439377   65227 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0531 19:42:01.439343   65227 cache.go:107] acquiring lock: {Name:mk72adaa63da3e28b3201284e2ff4070323810c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 19:42:01.439646   65227 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0531 19:42:01.439711   65227 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0531 19:42:01.441018   65227 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0531 19:42:01.441033   65227 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.1
	I0531 19:42:01.441056   65227 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.1
	I0531 19:42:01.441059   65227 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0531 19:42:01.441097   65227 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.1
	I0531 19:42:01.441249   65227 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.1
	I0531 19:42:01.441270   65227 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0531 19:42:01.592979   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1
	I0531 19:42:01.635909   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0531 19:42:01.658581   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0
	I0531 19:42:01.684375   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1
	I0531 19:42:01.708043   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1
	I0531 19:42:01.713289   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0531 19:42:01.713311   65227 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 274.026263ms
	I0531 19:42:01.713322   65227 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0531 19:42:01.733302   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0531 19:42:01.765731   65227 cache.go:162] opening:  /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1
	I0531 19:42:02.075505   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 exists
	I0531 19:42:02.075534   65227 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.1" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1" took 636.350373ms
	I0531 19:42:02.075544   65227 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.1 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.30.1 succeeded
	I0531 19:42:03.135312   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0531 19:42:03.135350   65227 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.695971271s
	I0531 19:42:03.135365   65227 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0531 19:42:03.414234   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 exists
	I0531 19:42:03.414258   65227 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.1" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1" took 1.975071797s
	I0531 19:42:03.414269   65227 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.1 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.30.1 succeeded
	I0531 19:42:03.643847   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 exists
	I0531 19:42:03.643881   65227 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.1" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1" took 2.204698199s
	I0531 19:42:03.643908   65227 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.1 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.30.1 succeeded
	I0531 19:42:03.668393   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 exists
	I0531 19:42:03.668416   65227 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.1" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1" took 2.229082125s
	I0531 19:42:03.668428   65227 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.1 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.30.1 succeeded
	I0531 19:42:03.691460   65227 cache.go:157] /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 exists
	I0531 19:42:03.691487   65227 cache.go:96] cache image "registry.k8s.io/etcd:3.5.12-0" -> "/home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0" took 2.252145904s
	I0531 19:42:03.691502   65227 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.12-0 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.12-0 succeeded
	I0531 19:42:03.691520   65227 cache.go:87] Successfully saved all images to host disk.
	I0531 19:42:05.372738   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:05.373304   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:05.373328   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:05.373246   64984 retry.go:31] will retry after 1.802067261s: waiting for machine to come up
	I0531 19:42:07.178041   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:07.178617   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:07.178640   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:07.178553   64984 retry.go:31] will retry after 2.263388228s: waiting for machine to come up
	I0531 19:42:09.443837   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:09.444308   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:09.444336   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:09.444260   64984 retry.go:31] will retry after 2.52247406s: waiting for machine to come up
	I0531 19:42:11.969604   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:11.970039   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:11.970065   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:11.969999   64984 retry.go:31] will retry after 2.330280444s: waiting for machine to come up
	I0531 19:42:14.302027   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:14.302528   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:14.302558   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:14.302473   64984 retry.go:31] will retry after 3.059608827s: waiting for machine to come up
	I0531 19:42:17.364031   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:17.364472   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find current IP address of domain old-k8s-version-422430 in network mk-old-k8s-version-422430
	I0531 19:42:17.364498   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | I0531 19:42:17.364438   64984 retry.go:31] will retry after 3.899505926s: waiting for machine to come up
	I0531 19:42:22.969733   65042 start.go:364] duration metric: took 24.364037718s to acquireMachinesLock for "kubernetes-upgrade-343216"
	I0531 19:42:22.969782   65042 start.go:96] Skipping create...Using existing machine configuration
	I0531 19:42:22.969788   65042 fix.go:54] fixHost starting: 
	I0531 19:42:22.970161   65042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:42:22.970209   65042 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:42:22.987414   65042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39403
	I0531 19:42:22.987888   65042 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:42:22.988323   65042 main.go:141] libmachine: Using API Version  1
	I0531 19:42:22.988362   65042 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:42:22.988708   65042 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:42:22.988892   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:22.989068   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetState
	I0531 19:42:22.990519   65042 fix.go:112] recreateIfNeeded on kubernetes-upgrade-343216: state=Running err=<nil>
	W0531 19:42:22.990541   65042 fix.go:138] unexpected machine state, will restart: <nil>
	I0531 19:42:22.992399   65042 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-343216" VM ...
	I0531 19:42:22.993522   65042 machine.go:94] provisionDockerMachine start ...
	I0531 19:42:22.993539   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:22.993723   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:22.996126   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:22.996591   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:22.996629   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:22.996751   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:22.996936   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:22.997135   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:22.997360   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:22.997529   65042 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:22.997750   65042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:42:22.997764   65042 main.go:141] libmachine: About to run SSH command:
	hostname
	I0531 19:42:23.106377   65042 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-343216
	
	I0531 19:42:23.106403   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:42:23.106652   65042 buildroot.go:166] provisioning hostname "kubernetes-upgrade-343216"
	I0531 19:42:23.106684   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:42:23.106878   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:23.109761   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.110094   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.110124   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.110322   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:23.110512   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.110677   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.110815   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:23.111004   65042 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:23.111284   65042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:42:23.111304   65042 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-343216 && echo "kubernetes-upgrade-343216" | sudo tee /etc/hostname
	I0531 19:42:23.236469   65042 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-343216
	
	I0531 19:42:23.236495   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:23.239434   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.239800   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.239840   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.240024   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:23.240219   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.240376   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.240515   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:23.240727   65042 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:23.240926   65042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:42:23.240951   65042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-343216' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-343216/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-343216' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:42:23.350057   65042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:42:23.350085   65042 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:42:23.350123   65042 buildroot.go:174] setting up certificates
	I0531 19:42:23.350149   65042 provision.go:84] configureAuth start
	I0531 19:42:23.350169   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetMachineName
	I0531 19:42:23.350440   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:42:23.353395   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.353720   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.353751   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.353902   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:23.356392   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.356769   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.356800   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.356921   65042 provision.go:143] copyHostCerts
	I0531 19:42:23.357052   65042 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:42:23.357069   65042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:42:23.357133   65042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:42:23.357249   65042 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:42:23.357259   65042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:42:23.357292   65042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:42:23.357387   65042 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:42:23.357398   65042 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:42:23.357432   65042 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:42:23.357515   65042 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-343216 san=[127.0.0.1 192.168.50.31 kubernetes-upgrade-343216 localhost minikube]
	I0531 19:42:21.266490   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.266994   64814 main.go:141] libmachine: (old-k8s-version-422430) Found IP for machine: 192.168.72.132
	I0531 19:42:21.267015   64814 main.go:141] libmachine: (old-k8s-version-422430) Reserving static IP address...
	I0531 19:42:21.267054   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has current primary IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.267364   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-422430", mac: "52:54:00:da:d3:55", ip: "192.168.72.132"} in network mk-old-k8s-version-422430
	I0531 19:42:21.338942   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | Getting to WaitForSSH function...
	I0531 19:42:21.338981   64814 main.go:141] libmachine: (old-k8s-version-422430) Reserved static IP address: 192.168.72.132
	I0531 19:42:21.339010   64814 main.go:141] libmachine: (old-k8s-version-422430) Waiting for SSH to be available...
	I0531 19:42:21.341852   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.342253   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.342283   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.342429   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | Using SSH client type: external
	I0531 19:42:21.342454   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | Using SSH private key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa (-rw-------)
	I0531 19:42:21.342483   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0531 19:42:21.342501   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | About to run SSH command:
	I0531 19:42:21.342514   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | exit 0
	I0531 19:42:21.464926   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | SSH cmd err, output: <nil>: 
	I0531 19:42:21.465218   64814 main.go:141] libmachine: (old-k8s-version-422430) KVM machine creation complete!
	I0531 19:42:21.465583   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetConfigRaw
	I0531 19:42:21.466121   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:21.466300   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:21.466430   64814 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0531 19:42:21.466445   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetState
	I0531 19:42:21.467731   64814 main.go:141] libmachine: Detecting operating system of created instance...
	I0531 19:42:21.467745   64814 main.go:141] libmachine: Waiting for SSH to be available...
	I0531 19:42:21.467750   64814 main.go:141] libmachine: Getting to WaitForSSH function...
	I0531 19:42:21.467756   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:21.469990   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.470307   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.470335   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.470419   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:21.470622   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.470797   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.470928   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:21.471100   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:21.471272   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:21.471282   64814 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0531 19:42:21.568292   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0531 19:42:21.568316   64814 main.go:141] libmachine: Detecting the provisioner...
	I0531 19:42:21.568327   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:21.571651   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.571995   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.572023   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.572213   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:21.572429   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.572614   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.572794   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:21.572994   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:21.573197   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:21.573210   64814 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0531 19:42:21.673293   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0531 19:42:21.673362   64814 main.go:141] libmachine: found compatible host: buildroot
	I0531 19:42:21.673372   64814 main.go:141] libmachine: Provisioning with buildroot...
	I0531 19:42:21.673379   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetMachineName
	I0531 19:42:21.673611   64814 buildroot.go:166] provisioning hostname "old-k8s-version-422430"
	I0531 19:42:21.673641   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetMachineName
	I0531 19:42:21.673818   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:21.676657   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.677059   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.677086   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.677278   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:21.677471   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.677642   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.677807   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:21.677961   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:21.678180   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:21.678199   64814 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-422430 && echo "old-k8s-version-422430" | sudo tee /etc/hostname
	I0531 19:42:21.793959   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-422430
	
	I0531 19:42:21.793998   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:21.796716   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.797115   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.797141   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.797383   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:21.797556   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.797722   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:21.797867   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:21.798038   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:21.798204   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:21.798219   64814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-422430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-422430/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-422430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0531 19:42:21.908187   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
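
The hosts fix-up just shown is deliberately idempotent: it only touches /etc/hosts when no entry for the node name exists, and then either rewrites the 127.0.1.1 line or appends one. A minimal Go sketch of assembling that same command (a hypothetical helper for illustration, not minikube's actual code):

	// Illustrative sketch: build the idempotent /etc/hosts update command
	// that the log shows being run over SSH. hostsUpdateCmd is a made-up name.
	package main

	import "fmt"

	func hostsUpdateCmd(hostname string) string {
		return fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname, hostname, hostname)
	}

	func main() {
		// Prints a script equivalent to the one in the log above.
		fmt.Println(hostsUpdateCmd("old-k8s-version-422430"))
	}
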
	I0531 19:42:21.908221   64814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18995-11107/.minikube CaCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18995-11107/.minikube}
	I0531 19:42:21.908252   64814 buildroot.go:174] setting up certificates
	I0531 19:42:21.908262   64814 provision.go:84] configureAuth start
	I0531 19:42:21.908271   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetMachineName
	I0531 19:42:21.908525   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetIP
	I0531 19:42:21.911085   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.911410   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.911431   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.911583   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:21.914159   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.914754   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:21.914794   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:21.914948   64814 provision.go:143] copyHostCerts
	I0531 19:42:21.915032   64814 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem, removing ...
	I0531 19:42:21.915049   64814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem
	I0531 19:42:21.915110   64814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/cert.pem (1123 bytes)
	I0531 19:42:21.915427   64814 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem, removing ...
	I0531 19:42:21.915444   64814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem
	I0531 19:42:21.915490   64814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/key.pem (1679 bytes)
	I0531 19:42:21.915611   64814 exec_runner.go:144] found /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem, removing ...
	I0531 19:42:21.915628   64814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem
	I0531 19:42:21.915664   64814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18995-11107/.minikube/ca.pem (1082 bytes)
	I0531 19:42:21.915760   64814 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-422430 san=[127.0.0.1 192.168.72.132 localhost minikube old-k8s-version-422430]
	I0531 19:42:22.328579   64814 provision.go:177] copyRemoteCerts
	I0531 19:42:22.328645   64814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:42:22.328685   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.331488   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.331794   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.331823   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.332027   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.332248   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.332386   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.332511   64814 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa Username:docker}
	I0531 19:42:22.411369   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:42:22.436337   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:42:22.460036   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0531 19:42:22.482571   64814 provision.go:87] duration metric: took 574.297903ms to configureAuth
	I0531 19:42:22.482595   64814 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:42:22.482739   64814 config.go:182] Loaded profile config "old-k8s-version-422430": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0531 19:42:22.482813   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.485439   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.485802   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.485832   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.485925   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.486128   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.486320   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.486473   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.486650   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:22.486850   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:22.486875   64814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:42:22.735113   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:42:22.735145   64814 main.go:141] libmachine: Checking connection to Docker...
	I0531 19:42:22.735157   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetURL
	I0531 19:42:22.736433   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | Using libvirt version 6000000
	I0531 19:42:22.738599   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.738913   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.738945   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.739112   64814 main.go:141] libmachine: Docker is up and running!
	I0531 19:42:22.739137   64814 main.go:141] libmachine: Reticulating splines...
	I0531 19:42:22.739145   64814 client.go:171] duration metric: took 24.38794577s to LocalClient.Create
	I0531 19:42:22.739172   64814 start.go:167] duration metric: took 24.388013984s to libmachine.API.Create "old-k8s-version-422430"
	I0531 19:42:22.739186   64814 start.go:293] postStartSetup for "old-k8s-version-422430" (driver="kvm2")
	I0531 19:42:22.739209   64814 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:42:22.739234   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:22.739470   64814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:42:22.739495   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.741593   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.741919   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.741960   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.742107   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.742265   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.742423   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.742567   64814 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa Username:docker}
	I0531 19:42:22.823568   64814 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:42:22.828050   64814 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:42:22.828071   64814 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:42:22.828136   64814 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:42:22.828226   64814 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:42:22.828345   64814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:42:22.837646   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:42:22.864910   64814 start.go:296] duration metric: took 125.710937ms for postStartSetup
	I0531 19:42:22.865001   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetConfigRaw
	I0531 19:42:22.865614   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetIP
	I0531 19:42:22.868442   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.868842   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.868871   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.869131   64814 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/config.json ...
	I0531 19:42:22.869319   64814 start.go:128] duration metric: took 24.542864602s to createHost
	I0531 19:42:22.869340   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.871930   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.872325   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.872355   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.872556   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.872738   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.872928   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.873105   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.873265   64814 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:22.873417   64814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.72.132 22 <nil> <nil>}
	I0531 19:42:22.873428   64814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 19:42:22.969605   64814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717184542.924235400
	
	I0531 19:42:22.969621   64814 fix.go:216] guest clock: 1717184542.924235400
	I0531 19:42:22.969628   64814 fix.go:229] Guest: 2024-05-31 19:42:22.9242354 +0000 UTC Remote: 2024-05-31 19:42:22.869330352 +0000 UTC m=+42.807760724 (delta=54.905048ms)
	I0531 19:42:22.969645   64814 fix.go:200] guest clock delta is within tolerance: 54.905048ms
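
The clock check above runs date +%s.%N on the guest and compares the result against the host's wall clock; the 54.905048ms delta is accepted because it falls within tolerance. A small sketch of that comparison, reusing the two timestamps from the log (the 2s tolerance and the helper name are assumptions, not minikube's values):

	// Illustrative sketch of the guest-clock check: parse "seconds.nanoseconds"
	// output and compute the skew against the host clock.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the guest's "date +%s.%N" output into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1717184542.924235400") // guest output from the log
		if err != nil {
			panic(err)
		}
		host := time.Date(2024, time.May, 31, 19, 42, 22, 869330352, time.UTC) // host timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed tolerance for this sketch
		fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
	}
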
	I0531 19:42:22.969650   64814 start.go:83] releasing machines lock for "old-k8s-version-422430", held for 24.643351278s
	I0531 19:42:22.969674   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:22.969917   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetIP
	I0531 19:42:22.972723   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.973139   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.973160   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.973353   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:22.973837   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:22.974015   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .DriverName
	I0531 19:42:22.974105   64814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:42:22.974160   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.974207   64814 ssh_runner.go:195] Run: cat /version.json
	I0531 19:42:22.974230   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHHostname
	I0531 19:42:22.976869   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.977045   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.977272   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.977299   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.977455   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.977459   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:22.977481   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:22.977622   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHPort
	I0531 19:42:22.977636   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.977816   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHKeyPath
	I0531 19:42:22.977817   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.977984   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetSSHUsername
	I0531 19:42:22.977998   64814 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa Username:docker}
	I0531 19:42:22.978094   64814 sshutil.go:53] new ssh client: &{IP:192.168.72.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/old-k8s-version-422430/id_rsa Username:docker}
	I0531 19:42:23.053522   64814 ssh_runner.go:195] Run: systemctl --version
	I0531 19:42:23.089166   64814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:42:23.259092   64814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 19:42:23.265664   64814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:42:23.265714   64814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:42:23.282891   64814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0531 19:42:23.282916   64814 start.go:494] detecting cgroup driver to use...
	I0531 19:42:23.282991   64814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:42:23.301342   64814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:42:23.315241   64814 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:42:23.315289   64814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:42:23.328579   64814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:42:23.342236   64814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:42:23.459474   64814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:42:23.601390   64814 docker.go:233] disabling docker service ...
	I0531 19:42:23.601500   64814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:42:23.617035   64814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:42:23.630246   64814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:42:23.776839   64814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:42:23.919495   64814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:42:23.935799   64814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:42:23.958631   64814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0531 19:42:23.958697   64814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:23.973773   64814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:42:23.973842   64814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:23.988810   64814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:24.000633   64814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
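
The three sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.2, force cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod" after it. The same line-level edits, sketched in Go against an assumed sample of the config file rather than the real one:

	// Illustrative sketch of the CRI-O config rewrites done by sed in the log.
	// The sample config contents are an assumption for the example.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "system.slice"
	`
		// Mirror: sed 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
		// Mirror: sed 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// Mirror: delete any conmon_cgroup line, then add conmon_cgroup = "pod"
		// right after the cgroup_manager line.
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
		conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
			ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
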
	I0531 19:42:24.011369   64814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:42:24.022756   64814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:42:24.032424   64814 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0531 19:42:24.032466   64814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0531 19:42:24.046764   64814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:42:24.058138   64814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:42:24.186355   64814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0531 19:42:24.344223   64814 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:42:24.344305   64814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:42:24.349512   64814 start.go:562] Will wait 60s for crictl version
	I0531 19:42:24.349555   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:24.353644   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:42:24.395084   64814 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
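
With CRI-O restarted, minikube waits for the socket and then confirms the runtime through crictl, which reports cri-o 1.29.1 here. A small sketch of that verification via exec (binary path taken from the log; error handling kept minimal for the example):

	// Illustrative sketch: shell out to crictl and confirm a CRI runtime answers.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err != nil {
			fmt.Println("crictl not ready yet:", err)
			return
		}
		if strings.Contains(string(out), "RuntimeName:  cri-o") {
			fmt.Println("cri-o runtime detected")
		}
		fmt.Print(string(out))
	}
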
	I0531 19:42:24.395173   64814 ssh_runner.go:195] Run: crio --version
	I0531 19:42:24.422890   64814 ssh_runner.go:195] Run: crio --version
	I0531 19:42:24.453984   64814 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0531 19:42:24.455167   64814 main.go:141] libmachine: (old-k8s-version-422430) Calling .GetIP
	I0531 19:42:24.457795   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:24.458131   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:d3:55", ip: ""} in network mk-old-k8s-version-422430: {Iface:virbr4 ExpiryTime:2024-05-31 20:42:12 +0000 UTC Type:0 Mac:52:54:00:da:d3:55 Iaid: IPaddr:192.168.72.132 Prefix:24 Hostname:old-k8s-version-422430 Clientid:01:52:54:00:da:d3:55}
	I0531 19:42:24.458157   64814 main.go:141] libmachine: (old-k8s-version-422430) DBG | domain old-k8s-version-422430 has defined IP address 192.168.72.132 and MAC address 52:54:00:da:d3:55 in network mk-old-k8s-version-422430
	I0531 19:42:24.458303   64814 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0531 19:42:24.462131   64814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:42:24.474198   64814 kubeadm.go:877] updating cluster {Name:old-k8s-version-422430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-422430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:42:24.474321   64814 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 19:42:24.474382   64814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:42:24.505113   64814 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0531 19:42:24.505180   64814 ssh_runner.go:195] Run: which lz4
	I0531 19:42:24.508782   64814 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0531 19:42:24.512726   64814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0531 19:42:24.512755   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
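
Since the runtime reported no preloaded images, minikube falls back to copying its cached preload tarball (preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4) into the guest as /preloaded.tar.lz4. A sketch of how the expected tarball name and cache path can be derived before deciding to copy (the helper names and the cache location under $HOME are assumptions; the naming pattern follows the path in the log):

	// Illustrative sketch: construct the preload tarball name for a Kubernetes
	// version / container runtime / arch and check whether it is cached locally.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadName(k8sVersion, runtime, arch string) string {
		// e.g. preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
		return fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4", k8sVersion, runtime, arch)
	}

	func main() {
		cacheDir := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball") // assumed cache location
		tarball := filepath.Join(cacheDir, preloadName("v1.20.0", "cri-o", "amd64"))
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload not cached, images would be pulled individually:", err)
			return
		}
		fmt.Println("preload cached at", tarball, "- copy it to /preloaded.tar.lz4 on the guest and extract with lz4")
	}
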
	I0531 19:42:23.620415   65042 provision.go:177] copyRemoteCerts
	I0531 19:42:23.620459   65042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0531 19:42:23.620479   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:23.623134   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.623433   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.623467   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.623680   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:23.623909   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.624089   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:23.624240   65042 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:42:23.707479   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0531 19:42:23.739469   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0531 19:42:23.764543   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0531 19:42:23.794738   65042 provision.go:87] duration metric: took 444.571217ms to configureAuth
	I0531 19:42:23.794769   65042 buildroot.go:189] setting minikube options for container-runtime
	I0531 19:42:23.794961   65042 config.go:182] Loaded profile config "kubernetes-upgrade-343216": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:42:23.795063   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:23.797959   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.798337   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:23.798373   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:23.798488   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:23.798687   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.798831   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:23.799005   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:23.799159   65042 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:23.799382   65042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:42:23.799409   65042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0531 19:42:30.090563   65227 start.go:364] duration metric: took 28.651282837s to acquireMachinesLock for "no-preload-696038"
	I0531 19:42:30.090616   65227 start.go:93] Provisioning new machine with config: &{Name:no-preload-696038 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.1 ClusterName:no-preload-696038 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0531 19:42:30.090717   65227 start.go:125] createHost starting for "" (driver="kvm2")
	I0531 19:42:26.093576   64814 crio.go:462] duration metric: took 1.584815387s to copy over tarball
	I0531 19:42:26.093646   64814 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0531 19:42:28.524003   64814 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.430323382s)
	I0531 19:42:28.524030   64814 crio.go:469] duration metric: took 2.430428266s to extract the tarball
	I0531 19:42:28.524037   64814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0531 19:42:28.569519   64814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:42:28.620726   64814 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0531 19:42:28.620753   64814 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0531 19:42:28.620840   64814 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:42:28.620845   64814 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:42:28.620889   64814 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:42:28.620918   64814 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:42:28.620972   64814 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0531 19:42:28.620971   64814 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:42:28.621005   64814 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0531 19:42:28.620857   64814 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:42:28.622289   64814 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:42:28.622298   64814 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0531 19:42:28.622289   64814 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0531 19:42:28.622326   64814 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:42:28.622294   64814 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:42:28.622422   64814 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:42:28.622326   64814 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:42:28.622664   64814 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:42:28.765200   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:42:28.777428   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:42:28.779163   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0531 19:42:28.781073   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:42:28.798145   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0531 19:42:28.828643   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0531 19:42:28.842741   64814 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0531 19:42:28.842785   64814 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:42:28.842841   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.924857   64814 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0531 19:42:28.924889   64814 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:42:28.924932   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.924992   64814 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0531 19:42:28.925029   64814 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0531 19:42:28.925063   64814 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:42:28.925118   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.925038   64814 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0531 19:42:28.925179   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.934413   64814 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0531 19:42:28.934438   64814 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0531 19:42:28.934468   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.934481   64814 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0531 19:42:28.934513   64814 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0531 19:42:28.934522   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0531 19:42:28.934541   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:28.934572   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0531 19:42:28.935893   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:42:28.935922   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0531 19:42:28.936086   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0531 19:42:28.938933   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0531 19:42:29.052968   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0531 19:42:29.053018   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0531 19:42:29.053040   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0531 19:42:29.088787   64814 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0531 19:42:29.088832   64814 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:42:29.088847   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0531 19:42:29.088884   64814 ssh_runner.go:195] Run: which crictl
	I0531 19:42:29.088936   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0531 19:42:29.089026   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0531 19:42:29.110743   64814 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0531 19:42:29.110760   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0531 19:42:29.147138   64814 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0531 19:42:29.615517   64814 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0531 19:42:29.758830   64814 cache_images.go:92] duration metric: took 1.138059771s to LoadCachedImages
	W0531 19:42:29.758932   64814 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18995-11107/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0531 19:42:29.758957   64814 kubeadm.go:928] updating node { 192.168.72.132 8443 v1.20.0 crio true true} ...
	I0531 19:42:29.759091   64814 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-422430 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.132
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-422430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 19:42:29.759182   64814 ssh_runner.go:195] Run: crio config
	I0531 19:42:29.809965   64814 cni.go:84] Creating CNI manager for ""
	I0531 19:42:29.809994   64814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 19:42:29.810010   64814 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:42:29.810036   64814 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.132 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-422430 NodeName:old-k8s-version-422430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0531 19:42:29.810203   64814 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-422430"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0531 19:42:29.810277   64814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0531 19:42:29.821730   64814 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:42:29.821796   64814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:42:29.835645   64814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0531 19:42:29.856110   64814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:42:29.875521   64814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0531 19:42:29.893103   64814 ssh_runner.go:195] Run: grep 192.168.72.132	control-plane.minikube.internal$ /etc/hosts
	I0531 19:42:29.897063   64814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0531 19:42:29.910671   64814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:42:30.044855   64814 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:42:30.065674   64814 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430 for IP: 192.168.72.132
	I0531 19:42:30.065694   64814 certs.go:194] generating shared ca certs ...
	I0531 19:42:30.065715   64814 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.065882   64814 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:42:30.065937   64814 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:42:30.065950   64814 certs.go:256] generating profile certs ...
	I0531 19:42:30.066018   64814 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.key
	I0531 19:42:30.066035   64814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.crt with IP's: []
	I0531 19:42:30.165735   65227 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0531 19:42:30.166009   65227 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:42:30.166060   65227 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:42:30.182463   65227 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35209
	I0531 19:42:30.182926   65227 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:42:30.183606   65227 main.go:141] libmachine: Using API Version  1
	I0531 19:42:30.183632   65227 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:42:30.184036   65227 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:42:30.184292   65227 main.go:141] libmachine: (no-preload-696038) Calling .GetMachineName
	I0531 19:42:30.184417   65227 main.go:141] libmachine: (no-preload-696038) Calling .DriverName
	I0531 19:42:30.184564   65227 start.go:159] libmachine.API.Create for "no-preload-696038" (driver="kvm2")
	I0531 19:42:30.184590   65227 client.go:168] LocalClient.Create starting
	I0531 19:42:30.184628   65227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem
	I0531 19:42:30.184672   65227 main.go:141] libmachine: Decoding PEM data...
	I0531 19:42:30.184693   65227 main.go:141] libmachine: Parsing certificate...
	I0531 19:42:30.184766   65227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem
	I0531 19:42:30.184797   65227 main.go:141] libmachine: Decoding PEM data...
	I0531 19:42:30.184810   65227 main.go:141] libmachine: Parsing certificate...
	I0531 19:42:30.184837   65227 main.go:141] libmachine: Running pre-create checks...
	I0531 19:42:30.184850   65227 main.go:141] libmachine: (no-preload-696038) Calling .PreCreateCheck
	I0531 19:42:30.185221   65227 main.go:141] libmachine: (no-preload-696038) Calling .GetConfigRaw
	I0531 19:42:30.208309   65227 main.go:141] libmachine: Creating machine...
	I0531 19:42:30.208329   65227 main.go:141] libmachine: (no-preload-696038) Calling .Create
	I0531 19:42:30.208499   65227 main.go:141] libmachine: (no-preload-696038) Creating KVM machine...
	I0531 19:42:30.209728   65227 main.go:141] libmachine: (no-preload-696038) DBG | found existing default KVM network
	I0531 19:42:30.211473   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:30.211311   65440 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0e0}
	I0531 19:42:30.211506   65227 main.go:141] libmachine: (no-preload-696038) DBG | created network xml: 
	I0531 19:42:30.211520   65227 main.go:141] libmachine: (no-preload-696038) DBG | <network>
	I0531 19:42:30.211533   65227 main.go:141] libmachine: (no-preload-696038) DBG |   <name>mk-no-preload-696038</name>
	I0531 19:42:30.211543   65227 main.go:141] libmachine: (no-preload-696038) DBG |   <dns enable='no'/>
	I0531 19:42:30.211550   65227 main.go:141] libmachine: (no-preload-696038) DBG |   
	I0531 19:42:30.211565   65227 main.go:141] libmachine: (no-preload-696038) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0531 19:42:30.211575   65227 main.go:141] libmachine: (no-preload-696038) DBG |     <dhcp>
	I0531 19:42:30.211585   65227 main.go:141] libmachine: (no-preload-696038) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0531 19:42:30.211592   65227 main.go:141] libmachine: (no-preload-696038) DBG |     </dhcp>
	I0531 19:42:30.211602   65227 main.go:141] libmachine: (no-preload-696038) DBG |   </ip>
	I0531 19:42:30.211609   65227 main.go:141] libmachine: (no-preload-696038) DBG |   
	I0531 19:42:30.211616   65227 main.go:141] libmachine: (no-preload-696038) DBG | </network>
	I0531 19:42:30.211623   65227 main.go:141] libmachine: (no-preload-696038) DBG | 
	I0531 19:42:30.264529   65227 main.go:141] libmachine: (no-preload-696038) DBG | trying to create private KVM network mk-no-preload-696038 192.168.39.0/24...
	I0531 19:42:30.347539   65227 main.go:141] libmachine: (no-preload-696038) DBG | private KVM network mk-no-preload-696038 192.168.39.0/24 created
	I0531 19:42:30.347620   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:30.347476   65440 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:42:30.347646   65227 main.go:141] libmachine: (no-preload-696038) Setting up store path in /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038 ...
	I0531 19:42:30.347664   65227 main.go:141] libmachine: (no-preload-696038) Building disk image from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 19:42:30.347784   65227 main.go:141] libmachine: (no-preload-696038) Downloading /home/jenkins/minikube-integration/18995-11107/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso...
	I0531 19:42:30.614941   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:30.614832   65440 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038/id_rsa...
	I0531 19:42:30.812150   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:30.812028   65440 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038/no-preload-696038.rawdisk...
	I0531 19:42:30.812182   65227 main.go:141] libmachine: (no-preload-696038) DBG | Writing magic tar header
	I0531 19:42:30.812201   65227 main.go:141] libmachine: (no-preload-696038) DBG | Writing SSH key tar header
	I0531 19:42:30.812254   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:30.812205   65440 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038 ...
	I0531 19:42:30.812382   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038
	I0531 19:42:30.812414   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube/machines
	I0531 19:42:30.812430   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038 (perms=drwx------)
	I0531 19:42:30.812445   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 19:42:30.812462   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube/machines (perms=drwxr-xr-x)
	I0531 19:42:30.812482   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107/.minikube (perms=drwxr-xr-x)
	I0531 19:42:30.812519   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18995-11107
	I0531 19:42:30.812531   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins/minikube-integration/18995-11107 (perms=drwxrwxr-x)
	I0531 19:42:30.812540   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0531 19:42:30.812558   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home/jenkins
	I0531 19:42:30.812572   65227 main.go:141] libmachine: (no-preload-696038) DBG | Checking permissions on dir: /home
	I0531 19:42:30.812585   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0531 19:42:30.812608   65227 main.go:141] libmachine: (no-preload-696038) DBG | Skipping /home - not owner
	I0531 19:42:30.812629   65227 main.go:141] libmachine: (no-preload-696038) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0531 19:42:30.812641   65227 main.go:141] libmachine: (no-preload-696038) Creating domain...
	I0531 19:42:30.813782   65227 main.go:141] libmachine: (no-preload-696038) define libvirt domain using xml: 
	I0531 19:42:30.813806   65227 main.go:141] libmachine: (no-preload-696038) <domain type='kvm'>
	I0531 19:42:30.813818   65227 main.go:141] libmachine: (no-preload-696038)   <name>no-preload-696038</name>
	I0531 19:42:30.813832   65227 main.go:141] libmachine: (no-preload-696038)   <memory unit='MiB'>2200</memory>
	I0531 19:42:30.813845   65227 main.go:141] libmachine: (no-preload-696038)   <vcpu>2</vcpu>
	I0531 19:42:30.813856   65227 main.go:141] libmachine: (no-preload-696038)   <features>
	I0531 19:42:30.813867   65227 main.go:141] libmachine: (no-preload-696038)     <acpi/>
	I0531 19:42:30.813878   65227 main.go:141] libmachine: (no-preload-696038)     <apic/>
	I0531 19:42:30.813886   65227 main.go:141] libmachine: (no-preload-696038)     <pae/>
	I0531 19:42:30.813895   65227 main.go:141] libmachine: (no-preload-696038)     
	I0531 19:42:30.813901   65227 main.go:141] libmachine: (no-preload-696038)   </features>
	I0531 19:42:30.813907   65227 main.go:141] libmachine: (no-preload-696038)   <cpu mode='host-passthrough'>
	I0531 19:42:30.813912   65227 main.go:141] libmachine: (no-preload-696038)   
	I0531 19:42:30.813924   65227 main.go:141] libmachine: (no-preload-696038)   </cpu>
	I0531 19:42:30.813931   65227 main.go:141] libmachine: (no-preload-696038)   <os>
	I0531 19:42:30.813936   65227 main.go:141] libmachine: (no-preload-696038)     <type>hvm</type>
	I0531 19:42:30.813943   65227 main.go:141] libmachine: (no-preload-696038)     <boot dev='cdrom'/>
	I0531 19:42:30.813948   65227 main.go:141] libmachine: (no-preload-696038)     <boot dev='hd'/>
	I0531 19:42:30.813955   65227 main.go:141] libmachine: (no-preload-696038)     <bootmenu enable='no'/>
	I0531 19:42:30.813960   65227 main.go:141] libmachine: (no-preload-696038)   </os>
	I0531 19:42:30.813967   65227 main.go:141] libmachine: (no-preload-696038)   <devices>
	I0531 19:42:30.813972   65227 main.go:141] libmachine: (no-preload-696038)     <disk type='file' device='cdrom'>
	I0531 19:42:30.813981   65227 main.go:141] libmachine: (no-preload-696038)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038/boot2docker.iso'/>
	I0531 19:42:30.813990   65227 main.go:141] libmachine: (no-preload-696038)       <target dev='hdc' bus='scsi'/>
	I0531 19:42:30.813998   65227 main.go:141] libmachine: (no-preload-696038)       <readonly/>
	I0531 19:42:30.814004   65227 main.go:141] libmachine: (no-preload-696038)     </disk>
	I0531 19:42:30.814011   65227 main.go:141] libmachine: (no-preload-696038)     <disk type='file' device='disk'>
	I0531 19:42:30.814019   65227 main.go:141] libmachine: (no-preload-696038)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0531 19:42:30.814035   65227 main.go:141] libmachine: (no-preload-696038)       <source file='/home/jenkins/minikube-integration/18995-11107/.minikube/machines/no-preload-696038/no-preload-696038.rawdisk'/>
	I0531 19:42:30.814042   65227 main.go:141] libmachine: (no-preload-696038)       <target dev='hda' bus='virtio'/>
	I0531 19:42:30.814048   65227 main.go:141] libmachine: (no-preload-696038)     </disk>
	I0531 19:42:30.814055   65227 main.go:141] libmachine: (no-preload-696038)     <interface type='network'>
	I0531 19:42:30.814060   65227 main.go:141] libmachine: (no-preload-696038)       <source network='mk-no-preload-696038'/>
	I0531 19:42:30.814067   65227 main.go:141] libmachine: (no-preload-696038)       <model type='virtio'/>
	I0531 19:42:30.814072   65227 main.go:141] libmachine: (no-preload-696038)     </interface>
	I0531 19:42:30.814081   65227 main.go:141] libmachine: (no-preload-696038)     <interface type='network'>
	I0531 19:42:30.814087   65227 main.go:141] libmachine: (no-preload-696038)       <source network='default'/>
	I0531 19:42:30.814092   65227 main.go:141] libmachine: (no-preload-696038)       <model type='virtio'/>
	I0531 19:42:30.814097   65227 main.go:141] libmachine: (no-preload-696038)     </interface>
	I0531 19:42:30.814104   65227 main.go:141] libmachine: (no-preload-696038)     <serial type='pty'>
	I0531 19:42:30.814109   65227 main.go:141] libmachine: (no-preload-696038)       <target port='0'/>
	I0531 19:42:30.814117   65227 main.go:141] libmachine: (no-preload-696038)     </serial>
	I0531 19:42:30.814122   65227 main.go:141] libmachine: (no-preload-696038)     <console type='pty'>
	I0531 19:42:30.814129   65227 main.go:141] libmachine: (no-preload-696038)       <target type='serial' port='0'/>
	I0531 19:42:30.814134   65227 main.go:141] libmachine: (no-preload-696038)     </console>
	I0531 19:42:30.814138   65227 main.go:141] libmachine: (no-preload-696038)     <rng model='virtio'>
	I0531 19:42:30.814144   65227 main.go:141] libmachine: (no-preload-696038)       <backend model='random'>/dev/random</backend>
	I0531 19:42:30.814147   65227 main.go:141] libmachine: (no-preload-696038)     </rng>
	I0531 19:42:30.814152   65227 main.go:141] libmachine: (no-preload-696038)     
	I0531 19:42:30.814156   65227 main.go:141] libmachine: (no-preload-696038)     
	I0531 19:42:30.814161   65227 main.go:141] libmachine: (no-preload-696038)   </devices>
	I0531 19:42:30.814164   65227 main.go:141] libmachine: (no-preload-696038) </domain>
	I0531 19:42:30.814170   65227 main.go:141] libmachine: (no-preload-696038) 
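
The XML dumped above is what the kvm2 driver hands to libvirt to define the no-preload-696038 VM. A minimal, hedged sketch of the define-and-start step using the libvirt Go bindings; the module path, the XML file name, and the panic-based error handling are assumptions for illustration, not the driver's actual code.

    package main

    import (
    	"fmt"
    	"os"

    	libvirt "libvirt.org/go/libvirt" // assumed module path for the libvirt Go bindings (requires cgo + libvirt)
    )

    func main() {
    	// Assumes the domain XML shown in the log has been written to a local file.
    	xml, err := os.ReadFile("no-preload-696038.xml")
    	if err != nil {
    		panic(err)
    	}
    	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the cluster config
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(string(xml)) // register the domain definition with libvirt
    	if err != nil {
    		panic(err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil { // boot it, the "Creating domain..." step in the log
    		panic(err)
    	}
    	name, _ := dom.GetName()
    	fmt.Println("started domain", name)
    }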
	I0531 19:42:30.916757   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:e3:86:54 in network default
	I0531 19:42:30.917567   65227 main.go:141] libmachine: (no-preload-696038) Ensuring networks are active...
	I0531 19:42:30.917611   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:30.918462   65227 main.go:141] libmachine: (no-preload-696038) Ensuring network default is active
	I0531 19:42:30.918857   65227 main.go:141] libmachine: (no-preload-696038) Ensuring network mk-no-preload-696038 is active
	I0531 19:42:30.919529   65227 main.go:141] libmachine: (no-preload-696038) Getting domain xml...
	I0531 19:42:30.920391   65227 main.go:141] libmachine: (no-preload-696038) Creating domain...
	I0531 19:42:30.178224   64814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.crt ...
	I0531 19:42:30.227081   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.crt: {Name:mk7c83cf66059fd00ad8ce601dd3cfa355541d69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.263651   64814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.key ...
	I0531 19:42:30.263701   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/client.key: {Name:mkfe32c502a6bec573ed760293464f5cb8231e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.263877   64814 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key.6cf98f0d
	I0531 19:42:30.263904   64814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt.6cf98f0d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.132]
	I0531 19:42:30.591653   64814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt.6cf98f0d ...
	I0531 19:42:30.591681   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt.6cf98f0d: {Name:mk9b553a52776d30d626b16c6b62775199327180 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.591856   64814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key.6cf98f0d ...
	I0531 19:42:30.591874   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key.6cf98f0d: {Name:mk085d42d67a58e170caa86a71c2b87e63ecebaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.591975   64814 certs.go:381] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt.6cf98f0d -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt
	I0531 19:42:30.592068   64814 certs.go:385] copying /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key.6cf98f0d -> /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key
	I0531 19:42:30.592144   64814 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.key
	I0531 19:42:30.592164   64814 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.crt with IP's: []
	I0531 19:42:30.843868   64814 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.crt ...
	I0531 19:42:30.843919   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.crt: {Name:mk7d93821236c9abae027d266f4150d9eebefd15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:30.844090   64814 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.key ...
	I0531 19:42:30.844106   64814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.key: {Name:mkcecb32705473ef036cb60319b1196f8886f27b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
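
The client.crt and proxy-client.crt steps above sign freshly generated keys with the pre-existing CAs under .minikube. A minimal sketch of that idea using Go's crypto/x509; the in-memory CA, subjects, and lifetimes below are illustrative stand-ins for the files under .minikube/profiles/old-k8s-version-422430.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// In-memory stand-in for the already existing minikubeCA key pair the log says is reused.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	check(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	check(err)

    	// Client certificate analogous to profiles/<name>/client.crt, signed by the CA above.
    	clientKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	clientTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
    	check(err)
    	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER}))
    }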
	I0531 19:42:30.844318   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:42:30.844365   64814 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:42:30.844380   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:42:30.844412   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:42:30.844443   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:42:30.844474   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:42:30.844535   64814 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:42:30.845227   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:42:30.873750   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:42:30.899129   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:42:30.927873   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:42:30.971374   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0531 19:42:31.010065   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:42:31.045527   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:42:31.087030   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/old-k8s-version-422430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0531 19:42:31.115643   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:42:31.142859   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:42:31.171905   64814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:42:31.202320   64814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:42:31.221821   64814 ssh_runner.go:195] Run: openssl version
	I0531 19:42:31.228457   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:42:31.240309   64814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:42:31.245171   64814 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:42:31.245232   64814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:42:31.251369   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:42:31.262669   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:42:31.276003   64814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:31.280894   64814 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:31.280941   64814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:31.287932   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:42:31.299146   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:42:31.311206   64814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:42:31.317108   64814 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:42:31.317151   64814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:42:31.325045   64814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
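
The openssl x509 -hash / ln -fs sequence above installs each PEM under /etc/ssl/certs using the OpenSSL subject-hash naming convention (<hash>.0). A small sketch of the same convention in Go, shelling out to openssl for the hash; paths and the function name are illustrative, and on the VM minikube runs the equivalent shell commands over SSH instead.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert computes a certificate's OpenSSL subject hash and links it as
    // /etc/ssl/certs/<hash>.0, mirroring the test -L || ln -fs step in the log.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	if _, err := os.Lstat(link); err == nil {
    		return nil // symlink already present
    	}
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }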
	I0531 19:42:31.339271   64814 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:42:31.345007   64814 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0531 19:42:31.345064   64814 kubeadm.go:391] StartCluster: {Name:old-k8s-version-422430 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-422430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.132 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:42:31.345156   64814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:42:31.345211   64814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:42:31.395717   64814 cri.go:89] found id: ""
	I0531 19:42:31.395793   64814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0531 19:42:31.406432   64814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0531 19:42:31.416567   64814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0531 19:42:31.426781   64814 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0531 19:42:31.426802   64814 kubeadm.go:156] found existing configuration files:
	
	I0531 19:42:31.426849   64814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0531 19:42:31.436340   64814 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0531 19:42:31.436394   64814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0531 19:42:31.445839   64814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0531 19:42:31.457017   64814 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0531 19:42:31.457077   64814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0531 19:42:31.466764   64814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0531 19:42:31.475996   64814 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0531 19:42:31.476057   64814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0531 19:42:31.485702   64814 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0531 19:42:31.494966   64814 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0531 19:42:31.495026   64814 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0531 19:42:31.504920   64814 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0531 19:42:31.629359   64814 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0531 19:42:31.629631   64814 kubeadm.go:309] [preflight] Running pre-flight checks
	I0531 19:42:31.804523   64814 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0531 19:42:31.804716   64814 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0531 19:42:31.804839   64814 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0531 19:42:32.048766   64814 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0531 19:42:29.837589   65042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0531 19:42:29.837617   65042 machine.go:97] duration metric: took 6.844082317s to provisionDockerMachine
	I0531 19:42:29.837631   65042 start.go:293] postStartSetup for "kubernetes-upgrade-343216" (driver="kvm2")
	I0531 19:42:29.837661   65042 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0531 19:42:29.837688   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:29.838024   65042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0531 19:42:29.838052   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:29.840792   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:29.841197   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:29.841225   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:29.841429   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:29.841614   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:29.841803   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:29.841954   65042 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:42:29.931665   65042 ssh_runner.go:195] Run: cat /etc/os-release
	I0531 19:42:29.936988   65042 info.go:137] Remote host: Buildroot 2023.02.9
	I0531 19:42:29.937009   65042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/addons for local assets ...
	I0531 19:42:29.937080   65042 filesync.go:126] Scanning /home/jenkins/minikube-integration/18995-11107/.minikube/files for local assets ...
	I0531 19:42:29.937198   65042 filesync.go:149] local asset: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem -> 183722.pem in /etc/ssl/certs
	I0531 19:42:29.937293   65042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0531 19:42:29.951305   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:42:29.981053   65042 start.go:296] duration metric: took 143.40735ms for postStartSetup
	I0531 19:42:29.981098   65042 fix.go:56] duration metric: took 7.011308695s for fixHost
	I0531 19:42:29.981122   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:29.983917   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:29.984342   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:29.984373   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:29.984538   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:29.984722   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:29.984848   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:29.985013   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:29.985212   65042 main.go:141] libmachine: Using SSH client type: native
	I0531 19:42:29.985401   65042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I0531 19:42:29.985411   65042 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0531 19:42:30.090404   65042 main.go:141] libmachine: SSH cmd err, output: <nil>: 1717184550.084364095
	
	I0531 19:42:30.090434   65042 fix.go:216] guest clock: 1717184550.084364095
	I0531 19:42:30.090445   65042 fix.go:229] Guest: 2024-05-31 19:42:30.084364095 +0000 UTC Remote: 2024-05-31 19:42:29.981102666 +0000 UTC m=+31.532562363 (delta=103.261429ms)
	I0531 19:42:30.090476   65042 fix.go:200] guest clock delta is within tolerance: 103.261429ms
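
The guest/host clock comparison above works from the output of date +%s.%N on the VM. A small sketch of the parsing and delta computation using the two timestamps from this log; the two-second tolerance is an assumption for illustration, not necessarily the value minikube applies.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output (nine-digit nanosecond field) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1717184550.084364095\n") // guest clock from the log
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, 5, 31, 19, 42, 29, 981102666, time.UTC) // host timestamp from the log
    	delta := guest.Sub(host)
    	within := delta < 2*time.Second && delta > -2*time.Second // assumed tolerance for this sketch
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
    }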
	I0531 19:42:30.090487   65042 start.go:83] releasing machines lock for "kubernetes-upgrade-343216", held for 7.120719473s
	I0531 19:42:30.090521   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:30.090825   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:42:30.093905   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.094309   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:30.094346   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.094541   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:30.095045   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:30.095241   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .DriverName
	I0531 19:42:30.095347   65042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0531 19:42:30.095393   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:30.095620   65042 ssh_runner.go:195] Run: cat /version.json
	I0531 19:42:30.095642   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHHostname
	I0531 19:42:30.099163   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.099572   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:30.099594   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.099830   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:30.099944   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.099985   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:30.100184   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:30.100340   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:30.100363   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:30.100358   65042 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:42:30.100522   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHPort
	I0531 19:42:30.100704   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHKeyPath
	I0531 19:42:30.100869   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetSSHUsername
	I0531 19:42:30.101035   65042 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/kubernetes-upgrade-343216/id_rsa Username:docker}
	I0531 19:42:30.211239   65042 ssh_runner.go:195] Run: systemctl --version
	I0531 19:42:30.218008   65042 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0531 19:42:30.389796   65042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0531 19:42:30.403005   65042 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0531 19:42:30.403077   65042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0531 19:42:30.417815   65042 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0531 19:42:30.417838   65042 start.go:494] detecting cgroup driver to use...
	I0531 19:42:30.417901   65042 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0531 19:42:30.443826   65042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0531 19:42:30.464030   65042 docker.go:217] disabling cri-docker service (if available) ...
	I0531 19:42:30.464110   65042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0531 19:42:30.485568   65042 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0531 19:42:30.502819   65042 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0531 19:42:30.667853   65042 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0531 19:42:30.819273   65042 docker.go:233] disabling docker service ...
	I0531 19:42:30.819337   65042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0531 19:42:30.836946   65042 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0531 19:42:30.852508   65042 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0531 19:42:31.010156   65042 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0531 19:42:31.171048   65042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0531 19:42:31.186817   65042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0531 19:42:31.210700   65042 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0531 19:42:31.210767   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.222659   65042 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0531 19:42:31.222750   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.237099   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.250063   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.262794   65042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0531 19:42:31.275731   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.287417   65042 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.299146   65042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0531 19:42:31.312536   65042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0531 19:42:31.329694   65042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0531 19:42:31.343633   65042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:42:31.496654   65042 ssh_runner.go:195] Run: sudo systemctl restart crio
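
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before crio is restarted. A small stand-alone sketch of the same two substitutions applied in Go to an example config snippet; the snippet contents are illustrative, and on the VM the edits are done in place with sed over SSH.

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	// Example stand-in for /etc/crio/crio.conf.d/02-crio.conf.
    	conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.6"

    [crio.runtime]
    cgroup_manager = "systemd"
    `
    	// Same effect as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	// Same effect as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf)
    }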
	I0531 19:42:32.252080   64814 out.go:204]   - Generating certificates and keys ...
	I0531 19:42:32.252179   64814 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0531 19:42:32.252270   64814 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0531 19:42:32.252375   64814 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0531 19:42:32.501126   64814 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0531 19:42:32.620089   64814 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0531 19:42:32.878895   64814 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0531 19:42:32.933282   64814 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0531 19:42:32.933617   64814 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-422430] and IPs [192.168.72.132 127.0.0.1 ::1]
	I0531 19:42:33.045865   64814 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0531 19:42:33.046138   64814 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-422430] and IPs [192.168.72.132 127.0.0.1 ::1]
	I0531 19:42:33.300386   64814 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0531 19:42:33.549091   64814 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0531 19:42:33.620557   64814 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0531 19:42:33.620733   64814 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0531 19:42:33.914991   64814 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0531 19:42:34.066572   64814 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0531 19:42:34.129031   64814 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0531 19:42:34.217657   64814 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0531 19:42:34.234718   64814 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0531 19:42:34.235408   64814 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0531 19:42:34.235492   64814 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0531 19:42:34.388144   64814 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0531 19:42:34.390253   64814 out.go:204]   - Booting up control plane ...
	I0531 19:42:34.390363   64814 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0531 19:42:34.399396   64814 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0531 19:42:34.401414   64814 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0531 19:42:34.408392   64814 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0531 19:42:34.415899   64814 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0531 19:42:32.562214   65227 main.go:141] libmachine: (no-preload-696038) Waiting to get IP...
	I0531 19:42:32.563047   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:32.563497   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:32.563542   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:32.563482   65440 retry.go:31] will retry after 245.003254ms: waiting for machine to come up
	I0531 19:42:32.810025   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:32.810650   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:32.810671   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:32.810614   65440 retry.go:31] will retry after 316.674977ms: waiting for machine to come up
	I0531 19:42:33.129223   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:33.129725   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:33.129755   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:33.129677   65440 retry.go:31] will retry after 358.137719ms: waiting for machine to come up
	I0531 19:42:33.489074   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:33.489591   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:33.489619   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:33.489552   65440 retry.go:31] will retry after 430.305401ms: waiting for machine to come up
	I0531 19:42:33.921812   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:33.922245   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:33.922274   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:33.922204   65440 retry.go:31] will retry after 565.032105ms: waiting for machine to come up
	I0531 19:42:34.488950   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:34.489398   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:34.489471   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:34.489363   65440 retry.go:31] will retry after 677.865898ms: waiting for machine to come up
	I0531 19:42:35.169133   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:35.169675   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:35.169706   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:35.169626   65440 retry.go:31] will retry after 1.160384628s: waiting for machine to come up
	I0531 19:42:36.332334   65227 main.go:141] libmachine: (no-preload-696038) DBG | domain no-preload-696038 has defined MAC address 52:54:00:56:93:52 in network mk-no-preload-696038
	I0531 19:42:36.332877   65227 main.go:141] libmachine: (no-preload-696038) DBG | unable to find current IP address of domain no-preload-696038 in network mk-no-preload-696038
	I0531 19:42:36.332908   65227 main.go:141] libmachine: (no-preload-696038) DBG | I0531 19:42:36.332826   65440 retry.go:31] will retry after 1.40968366s: waiting for machine to come up
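
The "will retry after ..." lines above come from a backoff-based poll for the new VM's DHCP lease. A minimal sketch of that pattern; waitForIP is a hypothetical stand-in for the real IP lookup, and the delay schedule is illustrative rather than the driver's actual one.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP stands in for the lookup that fails with "unable to find current IP address"
    // until the domain has obtained a lease; here it simply succeeds on the fifth attempt.
    func waitForIP(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errors.New("machine has no IP yet")
    	}
    	return "192.168.39.10", nil
    }

    func main() {
    	backoff := 200 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := waitForIP(attempt)
    		if err == nil {
    			fmt.Println("machine is up at", ip)
    			return
    		}
    		// Growing, jittered delay, similar in spirit to the retry.go lines in the log.
    		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
    		fmt.Printf("attempt %d failed (%v); retrying after %v\n", attempt, err, sleep)
    		time.Sleep(sleep)
    		backoff = backoff * 3 / 2
    	}
    }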
	I0531 19:42:37.109109   65042 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.612427224s)
	I0531 19:42:37.109135   65042 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0531 19:42:37.109176   65042 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0531 19:42:37.114729   65042 start.go:562] Will wait 60s for crictl version
	I0531 19:42:37.114803   65042 ssh_runner.go:195] Run: which crictl
	I0531 19:42:37.119028   65042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0531 19:42:37.160119   65042 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0531 19:42:37.160200   65042 ssh_runner.go:195] Run: crio --version
	I0531 19:42:37.188979   65042 ssh_runner.go:195] Run: crio --version
	I0531 19:42:37.219744   65042 out.go:177] * Preparing Kubernetes v1.30.1 on CRI-O 1.29.1 ...
	I0531 19:42:37.221241   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) Calling .GetIP
	I0531 19:42:37.224155   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:37.224735   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:d2:fc", ip: ""} in network mk-kubernetes-upgrade-343216: {Iface:virbr2 ExpiryTime:2024-05-31 20:41:30 +0000 UTC Type:0 Mac:52:54:00:e1:d2:fc Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:kubernetes-upgrade-343216 Clientid:01:52:54:00:e1:d2:fc}
	I0531 19:42:37.224766   65042 main.go:141] libmachine: (kubernetes-upgrade-343216) DBG | domain kubernetes-upgrade-343216 has defined IP address 192.168.50.31 and MAC address 52:54:00:e1:d2:fc in network mk-kubernetes-upgrade-343216
	I0531 19:42:37.225060   65042 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0531 19:42:37.229745   65042 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.1 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0531 19:42:37.229875   65042 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 19:42:37.229916   65042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:42:37.276995   65042 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:42:37.277022   65042 crio.go:433] Images already preloaded, skipping extraction
	I0531 19:42:37.277068   65042 ssh_runner.go:195] Run: sudo crictl images --output json
	I0531 19:42:37.309260   65042 crio.go:514] all images are preloaded for cri-o runtime.
	I0531 19:42:37.309286   65042 cache_images.go:84] Images are preloaded, skipping loading
	I0531 19:42:37.309295   65042 kubeadm.go:928] updating node { 192.168.50.31 8443 v1.30.1 crio true true} ...
	I0531 19:42:37.309420   65042 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-343216 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0531 19:42:37.309504   65042 ssh_runner.go:195] Run: crio config
	I0531 19:42:37.364351   65042 cni.go:84] Creating CNI manager for ""
	I0531 19:42:37.364370   65042 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 19:42:37.364378   65042 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0531 19:42:37.364398   65042 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.31 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-343216 NodeName:kubernetes-upgrade-343216 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.31"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0531 19:42:37.364529   65042 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-343216"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.31"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
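The KubeletConfiguration above pins every evictionHard threshold to "0%", matching its own comment that disk resource management is disabled by default. A small sketch that decodes that fragment and prints the thresholds; the use of gopkg.in/yaml.v3 is an assumption for illustration, not a library this test step uses:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// kubeletCfg models only the fields of interest from the generated KubeletConfiguration.
type kubeletCfg struct {
	ImageGCHighThresholdPercent int               `yaml:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `yaml:"evictionHard"`
	FailSwapOn                  bool              `yaml:"failSwapOn"`
}

const fragment = `
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg kubeletCfg
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	// With all thresholds at "0%", the kubelet never evicts pods on disk pressure.
	fmt.Println("imageGCHighThresholdPercent:", cfg.ImageGCHighThresholdPercent)
	for k, v := range cfg.EvictionHard {
		fmt.Printf("evictionHard[%s] = %s\n", k, v)
	}
}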
	
	I0531 19:42:37.364584   65042 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0531 19:42:37.377440   65042 binaries.go:44] Found k8s binaries, skipping transfer
	I0531 19:42:37.377496   65042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0531 19:42:37.389106   65042 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0531 19:42:37.408394   65042 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0531 19:42:37.427027   65042 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0531 19:42:37.444171   65042 ssh_runner.go:195] Run: grep 192.168.50.31	control-plane.minikube.internal$ /etc/hosts
	I0531 19:42:37.448457   65042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0531 19:42:37.593748   65042 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0531 19:42:37.610084   65042 certs.go:68] Setting up /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216 for IP: 192.168.50.31
	I0531 19:42:37.610117   65042 certs.go:194] generating shared ca certs ...
	I0531 19:42:37.610137   65042 certs.go:226] acquiring lock for ca certs: {Name:mk65ed4a83c7ef16c9a350c84758769c694e3baf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 19:42:37.610318   65042 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key
	I0531 19:42:37.610371   65042 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key
	I0531 19:42:37.610383   65042 certs.go:256] generating profile certs ...
	I0531 19:42:37.610490   65042 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/client.key
	I0531 19:42:37.610559   65042 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key.d10b9b8a
	I0531 19:42:37.610605   65042 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key
	I0531 19:42:37.610793   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem (1338 bytes)
	W0531 19:42:37.610840   65042 certs.go:480] ignoring /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372_empty.pem, impossibly tiny 0 bytes
	I0531 19:42:37.610853   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca-key.pem (1675 bytes)
	I0531 19:42:37.610893   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/ca.pem (1082 bytes)
	I0531 19:42:37.610924   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/cert.pem (1123 bytes)
	I0531 19:42:37.610959   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/certs/key.pem (1679 bytes)
	I0531 19:42:37.611014   65042 certs.go:484] found cert: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem (1708 bytes)
	I0531 19:42:37.611816   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0531 19:42:37.636595   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0531 19:42:37.660953   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0531 19:42:37.692356   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0531 19:42:37.718442   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0531 19:42:37.747155   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0531 19:42:37.772168   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0531 19:42:37.796031   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/kubernetes-upgrade-343216/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0531 19:42:37.819527   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/ssl/certs/183722.pem --> /usr/share/ca-certificates/183722.pem (1708 bytes)
	I0531 19:42:37.843229   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0531 19:42:37.868396   65042 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18995-11107/.minikube/certs/18372.pem --> /usr/share/ca-certificates/18372.pem (1338 bytes)
	I0531 19:42:37.893731   65042 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0531 19:42:37.910999   65042 ssh_runner.go:195] Run: openssl version
	I0531 19:42:37.917603   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/183722.pem && ln -fs /usr/share/ca-certificates/183722.pem /etc/ssl/certs/183722.pem"
	I0531 19:42:37.929526   65042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/183722.pem
	I0531 19:42:37.934811   65042 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 31 18:37 /usr/share/ca-certificates/183722.pem
	I0531 19:42:37.934862   65042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/183722.pem
	I0531 19:42:37.941286   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/183722.pem /etc/ssl/certs/3ec20f2e.0"
	I0531 19:42:37.951997   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0531 19:42:37.963312   65042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:37.968042   65042 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 31 17:57 /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:37.968096   65042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0531 19:42:37.973935   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0531 19:42:37.983414   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18372.pem && ln -fs /usr/share/ca-certificates/18372.pem /etc/ssl/certs/18372.pem"
	I0531 19:42:37.995328   65042 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18372.pem
	I0531 19:42:38.001210   65042 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 31 18:37 /usr/share/ca-certificates/18372.pem
	I0531 19:42:38.001252   65042 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18372.pem
	I0531 19:42:38.008726   65042 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18372.pem /etc/ssl/certs/51391683.0"
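The three "ln -fs" runs above install each CA under /etc/ssl/certs keyed by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A hypothetical stand-alone sketch of that step, run locally rather than through the SSH runner; the paths are taken from the log, the rest is illustrative:

// Sketch: compute the OpenSSL subject hash of a PEM and link it into
// /etc/ssl/certs/<hash>.0, mirroring the "openssl x509 -hash" plus "ln -fs"
// pair shown in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, matching the "-f" in "ln -fs".
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}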
	I0531 19:42:38.021549   65042 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0531 19:42:38.027033   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0531 19:42:38.032884   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0531 19:42:38.038862   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0531 19:42:38.044431   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0531 19:42:38.049979   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0531 19:42:38.055352   65042 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
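Each "openssl x509 -checkend 86400" run above asks whether the certificate will still be valid 24 hours from now. A rough Go equivalent of one such probe, using crypto/x509 instead of shelling out; the path below is taken from the log, but the approach is an illustration, not minikube's code:

// Report whether a PEM certificate expires within the next 24 hours,
// roughly matching "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if "now + d" falls after NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}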
	I0531 19:42:38.060899   65042 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-343216 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.30.1 ClusterName:kubernetes-upgrade-343216 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.31 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 19:42:38.061013   65042 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0531 19:42:38.061062   65042 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0531 19:42:38.098515   65042 cri.go:89] found id: "52cc6364fdf4631dca65135e1b7b76741776da75e5d9250a49e093fd29d0b573"
	I0531 19:42:38.098542   65042 cri.go:89] found id: "e63e3b72215044c1064929a25db1bdf749e73683ac99f772ec112cc2c653801e"
	I0531 19:42:38.098548   65042 cri.go:89] found id: "66453a004980d52b15bd9d402acf0491f955de87110ba5c3f8ad3f2de0d134ae"
	I0531 19:42:38.098553   65042 cri.go:89] found id: "da84068404fa48b6cf2fe925a3e7c9ea48628a20c9444602bbe3b7b6cab71250"
	I0531 19:42:38.098567   65042 cri.go:89] found id: "a3f9c8fac218f1b70d58797e09d1719cde25a4aa7b2140d416fa79b7d41d6f6f"
	I0531 19:42:38.098572   65042 cri.go:89] found id: "ac7b0cd60f435c0e199d5908b045bcc137ee7d5de98c3b9cc5325672a61fbdcd"
	I0531 19:42:38.098576   65042 cri.go:89] found id: "f1929ac07707d882c03b81a3c16efb4ff0f6e37e05852e8d9fafbdc6c6ff7224"
	I0531 19:42:38.098581   65042 cri.go:89] found id: "d155b20e283f2e7d24e8c2b8b978cf8560da7936288edd3309a0d1a91cbbbdd4"
	I0531 19:42:38.098585   65042 cri.go:89] found id: ""
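The "found id" entries above come from the crictl listing issued a few lines earlier, filtered to containers carrying the kube-system namespace label. A simplified sketch of the same listing, assuming crictl is available locally and run without the SSH runner:

// List kube-system container IDs via crictl, as the log's
// "crictl ps -a --quiet --label ..." invocation does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
	fmt.Println("total:", len(ids))
}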
	I0531 19:42:38.098636   65042 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.320786540Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63c9ff1b-6ae0-4bf9-8c1a-670d8ffddd17 name=/runtime.v1.RuntimeService/Version
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.328279382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4e11fdb6-8e57-410d-a629-e461680a7b73 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.328873311Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717184568328813520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4e11fdb6-8e57-410d-a629-e461680a7b73 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.329493964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8774bf69-d274-441c-86cf-c7b08502973d name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.329661341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8774bf69-d274-441c-86cf-c7b08502973d name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.330238745Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46f7b627433b82587ea731a4cf842226819dbca20b9a21fc2482c975622d363e,PodSandboxId:6505d08fe3acb5d754d1c4adc65233268b61b1b53112b3a73d0d0d0681bff8bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565531008369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d79323e1334c6671690b709f19813a15494d7922632b1c756b75cc6962b0d47,PodSandboxId:e98c9434cd52b344fa33271d9240c3e1a5d5baacc17a43af2190d4bbe01ae585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565437294221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b835634ecf8d02372368d2256f600357bf8a6f69a597665e22b381480f643c3,PodSandboxId:1489f8f8e2bd93d94b1faaac3cca58105415b0f57fa513e4af83a2326763f08f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAIN
ER_RUNNING,CreatedAt:1717184564935990173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e829888b0f363315acb5d85a6350cf119cd64c7252fd1a7b79650540c19e103,PodSandboxId:b557b028d0b66d1f89b6f77300739324c166514b3875bf974dd52a904a76b946,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
7184564919915520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45f49c5354d5b567f13ad76eb446da4410b1bac739ce050d288d13b3a8dda7e,PodSandboxId:383dd6d2bbd4dc3fb3cdf47a7905dc6784d767f2452c044508e5f6e5fec47815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717184561127084189,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e607cf7559a4dcf02da40d3ae0e7852c484aad01fe94e4d144d3abe667bff37a,PodSandboxId:955e3a6d4291474529865f34411c162c00f6f2ce371d2ef6fefc8691b624561a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717184561134608
137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5a2300a565774c1d398903ff53e3db135dc38e249f64f77ba730909bcb3276,PodSandboxId:ddb45bb571084f2beeff818ed5ec9ab84185f12fd3a95a07992035cf4e87f616,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717184561022
798469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57582e1bb86e826c458c1be170d8f624168df3820068f0941d38c2907915d37,PodSandboxId:23035c78ee64f3a37a77f1489921077c4ba3c7b4eb6b388966834a0acee0e4fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717184561009961642,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63e3b72215044c1064929a25db1bdf749e73683ac99f772ec112cc2c653801e,PodSandboxId:b0f88a293ff8d86b2dac98fa8be9fdebe06fcedf4801d67e63760684bf14c25f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527056468380,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cc6364fdf4631dca65135e1b7b76741776da75e5d9250a49e093fd29d0b573,PodSandboxId:e063eaf107224b92ac47ef3ed0499e276ecba1de0ba7d37463a59c1b4ab08c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527074743636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66453a004980d52b15bd9d402acf0491f955de87110ba5c3f8ad3f2de0d134ae,PodSandboxId:a108d302b3af1220ffc583d65c43d1e50c7c34a16b7fc6
5a78bcab8f5fcdcafd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717184526599067775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da84068404fa48b6cf2fe925a3e7c9ea48628a20c9444602bbe3b7b6cab71250,PodSandboxId:fc16adb3a84540044e65b1d177fc2a6d16530b22bf54d173a28bbfda54fb18d6,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717184526069908843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac7b0cd60f435c0e199d5908b045bcc137ee7d5de98c3b9cc5325672a61fbdcd,PodSandboxId:7a6ea727e646f03469aeb1be236fe3a6312435ffb2b710ff6c55f0a87296dec0,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717184507620646331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f9c8fac218f1b70d58797e09d1719cde25a4aa7b2140d416fa79b7d41d6f6f,PodSandboxId:1accbf3cedac7a428414e2b551453618bb78288776eeaff33559a0a29bc14d15,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717184507652485362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1929ac07707d882c03b81a3c16efb4ff0f6e37e05852e8d9fafbdc6c6ff7224,PodSandboxId:a281220550c04250285c6578927c8d573a1a866157d3dd82dc8c69b56725498e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717184507583233277,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d155b20e283f2e7d24e8c2b8b978cf8560da7936288edd3309a0d1a91cbbbdd4,PodSandboxId:e400933be9ef2edeeb58cb0821bdb67434eb46a6d336b31087253d28a967e270,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717184507540691782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8774bf69-d274-441c-86cf-c7b08502973d name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.379057292Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42ef3b31-08a7-4d21-8fda-9b48b1ee5a27 name=/runtime.v1.RuntimeService/Version
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.379186314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42ef3b31-08a7-4d21-8fda-9b48b1ee5a27 name=/runtime.v1.RuntimeService/Version
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.380876610Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=89477c22-5fdc-45cf-b4f5-9be869fa2694 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.381375748Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717184568381353830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=89477c22-5fdc-45cf-b4f5-9be869fa2694 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.382084088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2560ad9-4822-4d93-acbf-8afdf707bb70 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.382154441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2560ad9-4822-4d93-acbf-8afdf707bb70 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.382598694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46f7b627433b82587ea731a4cf842226819dbca20b9a21fc2482c975622d363e,PodSandboxId:6505d08fe3acb5d754d1c4adc65233268b61b1b53112b3a73d0d0d0681bff8bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565531008369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d79323e1334c6671690b709f19813a15494d7922632b1c756b75cc6962b0d47,PodSandboxId:e98c9434cd52b344fa33271d9240c3e1a5d5baacc17a43af2190d4bbe01ae585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565437294221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b835634ecf8d02372368d2256f600357bf8a6f69a597665e22b381480f643c3,PodSandboxId:1489f8f8e2bd93d94b1faaac3cca58105415b0f57fa513e4af83a2326763f08f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAIN
ER_RUNNING,CreatedAt:1717184564935990173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e829888b0f363315acb5d85a6350cf119cd64c7252fd1a7b79650540c19e103,PodSandboxId:b557b028d0b66d1f89b6f77300739324c166514b3875bf974dd52a904a76b946,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
7184564919915520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45f49c5354d5b567f13ad76eb446da4410b1bac739ce050d288d13b3a8dda7e,PodSandboxId:383dd6d2bbd4dc3fb3cdf47a7905dc6784d767f2452c044508e5f6e5fec47815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717184561127084189,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e607cf7559a4dcf02da40d3ae0e7852c484aad01fe94e4d144d3abe667bff37a,PodSandboxId:955e3a6d4291474529865f34411c162c00f6f2ce371d2ef6fefc8691b624561a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717184561134608
137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5a2300a565774c1d398903ff53e3db135dc38e249f64f77ba730909bcb3276,PodSandboxId:ddb45bb571084f2beeff818ed5ec9ab84185f12fd3a95a07992035cf4e87f616,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717184561022
798469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57582e1bb86e826c458c1be170d8f624168df3820068f0941d38c2907915d37,PodSandboxId:23035c78ee64f3a37a77f1489921077c4ba3c7b4eb6b388966834a0acee0e4fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717184561009961642,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63e3b72215044c1064929a25db1bdf749e73683ac99f772ec112cc2c653801e,PodSandboxId:b0f88a293ff8d86b2dac98fa8be9fdebe06fcedf4801d67e63760684bf14c25f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527056468380,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cc6364fdf4631dca65135e1b7b76741776da75e5d9250a49e093fd29d0b573,PodSandboxId:e063eaf107224b92ac47ef3ed0499e276ecba1de0ba7d37463a59c1b4ab08c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527074743636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66453a004980d52b15bd9d402acf0491f955de87110ba5c3f8ad3f2de0d134ae,PodSandboxId:a108d302b3af1220ffc583d65c43d1e50c7c34a16b7fc6
5a78bcab8f5fcdcafd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717184526599067775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da84068404fa48b6cf2fe925a3e7c9ea48628a20c9444602bbe3b7b6cab71250,PodSandboxId:fc16adb3a84540044e65b1d177fc2a6d16530b22bf54d173a28bbfda54fb18d6,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717184526069908843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac7b0cd60f435c0e199d5908b045bcc137ee7d5de98c3b9cc5325672a61fbdcd,PodSandboxId:7a6ea727e646f03469aeb1be236fe3a6312435ffb2b710ff6c55f0a87296dec0,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717184507620646331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f9c8fac218f1b70d58797e09d1719cde25a4aa7b2140d416fa79b7d41d6f6f,PodSandboxId:1accbf3cedac7a428414e2b551453618bb78288776eeaff33559a0a29bc14d15,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717184507652485362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1929ac07707d882c03b81a3c16efb4ff0f6e37e05852e8d9fafbdc6c6ff7224,PodSandboxId:a281220550c04250285c6578927c8d573a1a866157d3dd82dc8c69b56725498e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717184507583233277,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d155b20e283f2e7d24e8c2b8b978cf8560da7936288edd3309a0d1a91cbbbdd4,PodSandboxId:e400933be9ef2edeeb58cb0821bdb67434eb46a6d336b31087253d28a967e270,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717184507540691782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2560ad9-4822-4d93-acbf-8afdf707bb70 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.400664750Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d250deb4-48e4-484d-b65e-38ce204ff19f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.400882401Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:6505d08fe3acb5d754d1c4adc65233268b61b1b53112b3a73d0d0d0681bff8bf,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-bqv48,Uid:bc0a96ea-d322-4927-b721-15aa1bf76081,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184564842962059,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T19:42:44.362887448Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e98c9434cd52b344fa33271d9240c3e1a5d5baacc17a43af2190d4bbe01ae585,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sffbq,Uid:b74da612-db3a-4285-adb0-5bb8cc3b003a,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184564841017629,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T19:42:44.362888653Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b557b028d0b66d1f89b6f77300739324c166514b3875bf974dd52a904a76b946,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ee9cce32-95e8-42c5-a8f4-b3be0111d99a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184564703447621,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},An
notations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-05-31T19:42:44.362886210Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1489f8f8e2bd93d94b1faaac3cca58105415b0f57fa513e4af83a2326763f08f,Metadata:&PodSandboxMetadata{Name:kube-proxy-npg9m,Uid:4434c8bf-b436-4146-9826-539b44243885,N
amespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184564701309607,Labels:map[string]string{controller-revision-hash: 5dbf89796d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-05-31T19:42:44.362880454Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:383dd6d2bbd4dc3fb3cdf47a7905dc6784d767f2452c044508e5f6e5fec47815,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-343216,Uid:ca0784244a4b0be11be8fb6b0c1211ca,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184560857482444,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6
b0c1211ca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ca0784244a4b0be11be8fb6b0c1211ca,kubernetes.io/config.seen: 2024-05-31T19:42:40.366793065Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:955e3a6d4291474529865f34411c162c00f6f2ce371d2ef6fefc8691b624561a,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-343216,Uid:bb0ab5bc1284d19c8fdc1fd1c915adb3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184560856879639,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: bb0ab5bc1284d19c8fdc1fd1c915adb3,kubernetes.io/config.seen: 2024-05-31T19:42:40.366798398Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2
3035c78ee64f3a37a77f1489921077c4ba3c7b4eb6b388966834a0acee0e4fb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-343216,Uid:f0df25b900cf05d8075233e6eeda8bd2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1717184560849311988,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.31:8443,kubernetes.io/config.hash: f0df25b900cf05d8075233e6eeda8bd2,kubernetes.io/config.seen: 2024-05-31T19:42:40.366797626Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ddb45bb571084f2beeff818ed5ec9ab84185f12fd3a95a07992035cf4e87f616,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-343216,Uid:c886e0a5a86fc4720b37bdf6575ba344,Namespace:kube-system,Attem
pt:1,},State:SANDBOX_READY,CreatedAt:1717184560841688717,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.31:2379,kubernetes.io/config.hash: c886e0a5a86fc4720b37bdf6575ba344,kubernetes.io/config.seen: 2024-05-31T19:42:40.366796452Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d250deb4-48e4-484d-b65e-38ce204ff19f name=/runtime.v1.RuntimeService/ListPodSandbox
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.401497985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=68570d01-12af-471c-8059-b62ed5811e56 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.401609291Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=68570d01-12af-471c-8059-b62ed5811e56 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.401866759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46f7b627433b82587ea731a4cf842226819dbca20b9a21fc2482c975622d363e,PodSandboxId:6505d08fe3acb5d754d1c4adc65233268b61b1b53112b3a73d0d0d0681bff8bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565531008369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d79323e1334c6671690b709f19813a15494d7922632b1c756b75cc6962b0d47,PodSandboxId:e98c9434cd52b344fa33271d9240c3e1a5d5baacc17a43af2190d4bbe01ae585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565437294221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b835634ecf8d02372368d2256f600357bf8a6f69a597665e22b381480f643c3,PodSandboxId:1489f8f8e2bd93d94b1faaac3cca58105415b0f57fa513e4af83a2326763f08f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAIN
ER_RUNNING,CreatedAt:1717184564935990173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e829888b0f363315acb5d85a6350cf119cd64c7252fd1a7b79650540c19e103,PodSandboxId:b557b028d0b66d1f89b6f77300739324c166514b3875bf974dd52a904a76b946,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
7184564919915520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45f49c5354d5b567f13ad76eb446da4410b1bac739ce050d288d13b3a8dda7e,PodSandboxId:383dd6d2bbd4dc3fb3cdf47a7905dc6784d767f2452c044508e5f6e5fec47815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717184561127084189,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e607cf7559a4dcf02da40d3ae0e7852c484aad01fe94e4d144d3abe667bff37a,PodSandboxId:955e3a6d4291474529865f34411c162c00f6f2ce371d2ef6fefc8691b624561a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717184561134608
137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5a2300a565774c1d398903ff53e3db135dc38e249f64f77ba730909bcb3276,PodSandboxId:ddb45bb571084f2beeff818ed5ec9ab84185f12fd3a95a07992035cf4e87f616,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717184561022
798469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57582e1bb86e826c458c1be170d8f624168df3820068f0941d38c2907915d37,PodSandboxId:23035c78ee64f3a37a77f1489921077c4ba3c7b4eb6b388966834a0acee0e4fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717184561009961642,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=68570d01-12af-471c-8059-b62ed5811e56 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.421791570Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=406b7e53-2f50-4940-949b-5c8d7be7999e name=/runtime.v1.RuntimeService/Version
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.421857171Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=406b7e53-2f50-4940-949b-5c8d7be7999e name=/runtime.v1.RuntimeService/Version
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.423502692Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6798f02-aa12-4d0d-868e-714ed768df07 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.424038155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1717184568424003565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124340,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6798f02-aa12-4d0d-868e-714ed768df07 name=/runtime.v1.ImageService/ImageFsInfo
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.424818810Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8327f76f-ff55-4f3f-a928-18580cc8aee5 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.424889639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8327f76f-ff55-4f3f-a928-18580cc8aee5 name=/runtime.v1.RuntimeService/ListContainers
	May 31 19:42:48 kubernetes-upgrade-343216 crio[2279]: time="2024-05-31 19:42:48.425402668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:46f7b627433b82587ea731a4cf842226819dbca20b9a21fc2482c975622d363e,PodSandboxId:6505d08fe3acb5d754d1c4adc65233268b61b1b53112b3a73d0d0d0681bff8bf,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565531008369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d79323e1334c6671690b709f19813a15494d7922632b1c756b75cc6962b0d47,PodSandboxId:e98c9434cd52b344fa33271d9240c3e1a5d5baacc17a43af2190d4bbe01ae585,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1717184565437294221,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b835634ecf8d02372368d2256f600357bf8a6f69a597665e22b381480f643c3,PodSandboxId:1489f8f8e2bd93d94b1faaac3cca58105415b0f57fa513e4af83a2326763f08f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAIN
ER_RUNNING,CreatedAt:1717184564935990173,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e829888b0f363315acb5d85a6350cf119cd64c7252fd1a7b79650540c19e103,PodSandboxId:b557b028d0b66d1f89b6f77300739324c166514b3875bf974dd52a904a76b946,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:171
7184564919915520,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e45f49c5354d5b567f13ad76eb446da4410b1bac739ce050d288d13b3a8dda7e,PodSandboxId:383dd6d2bbd4dc3fb3cdf47a7905dc6784d767f2452c044508e5f6e5fec47815,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_RUNNING,CreatedAt:1717184561127084189,
Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e607cf7559a4dcf02da40d3ae0e7852c484aad01fe94e4d144d3abe667bff37a,PodSandboxId:955e3a6d4291474529865f34411c162c00f6f2ce371d2ef6fefc8691b624561a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_RUNNING,CreatedAt:1717184561134608
137,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5a2300a565774c1d398903ff53e3db135dc38e249f64f77ba730909bcb3276,PodSandboxId:ddb45bb571084f2beeff818ed5ec9ab84185f12fd3a95a07992035cf4e87f616,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1717184561022
798469,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d57582e1bb86e826c458c1be170d8f624168df3820068f0941d38c2907915d37,PodSandboxId:23035c78ee64f3a37a77f1489921077c4ba3c7b4eb6b388966834a0acee0e4fb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_RUNNING,CreatedAt:1717184561009961642,Labels:map[string]
string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e63e3b72215044c1064929a25db1bdf749e73683ac99f772ec112cc2c653801e,PodSandboxId:b0f88a293ff8d86b2dac98fa8be9fdebe06fcedf4801d67e63760684bf14c25f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527056468380,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-sffbq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b74da612-db3a-4285-adb0-5bb8cc3b003a,},Annotations:map[string]string{io.kubernetes.container.hash: f99a2529,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52cc6364fdf4631dca65135e1b7b76741776da75e5d9250a49e093fd29d0b573,PodSandboxId:e063eaf107224b92ac47ef3ed0499e276ecba1de0ba7d37463a59c1b4ab08c46,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1717184527074743636,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-bqv48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc0a96ea-d322-4927-b721-15aa1bf76081,},Annotations:map[string]string{io.kubernetes.container.hash: 4c906120,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66453a004980d52b15bd9d402acf0491f955de87110ba5c3f8ad3f2de0d134ae,PodSandboxId:a108d302b3af1220ffc583d65c43d1e50c7c34a16b7fc6
5a78bcab8f5fcdcafd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd,State:CONTAINER_EXITED,CreatedAt:1717184526599067775,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-npg9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4434c8bf-b436-4146-9826-539b44243885,},Annotations:map[string]string{io.kubernetes.container.hash: 56b39058,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da84068404fa48b6cf2fe925a3e7c9ea48628a20c9444602bbe3b7b6cab71250,PodSandboxId:fc16adb3a84540044e65b1d177fc2a6d16530b22bf54d173a28bbfda54fb18d6,Metadata:&Conta
inerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1717184526069908843,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee9cce32-95e8-42c5-a8f4-b3be0111d99a,},Annotations:map[string]string{io.kubernetes.container.hash: 2c66d0c1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac7b0cd60f435c0e199d5908b045bcc137ee7d5de98c3b9cc5325672a61fbdcd,PodSandboxId:7a6ea727e646f03469aeb1be236fe3a6312435ffb2b710ff6c55f0a87296dec0,Metadata:&ContainerMetadata{
Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035,State:CONTAINER_EXITED,CreatedAt:1717184507620646331,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ca0784244a4b0be11be8fb6b0c1211ca,},Annotations:map[string]string{io.kubernetes.container.hash: 200064a4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3f9c8fac218f1b70d58797e09d1719cde25a4aa7b2140d416fa79b7d41d6f6f,PodSandboxId:1accbf3cedac7a428414e2b551453618bb78288776eeaff33559a0a29bc14d15,Metadata:&ContainerMetadata{Name:e
tcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1717184507652485362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c886e0a5a86fc4720b37bdf6575ba344,},Annotations:map[string]string{io.kubernetes.container.hash: 4bef1a3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1929ac07707d882c03b81a3c16efb4ff0f6e37e05852e8d9fafbdc6c6ff7224,PodSandboxId:a281220550c04250285c6578927c8d573a1a866157d3dd82dc8c69b56725498e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},I
mage:&ImageSpec{Image:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c,State:CONTAINER_EXITED,CreatedAt:1717184507583233277,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb0ab5bc1284d19c8fdc1fd1c915adb3,},Annotations:map[string]string{io.kubernetes.container.hash: ac6c6b5e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d155b20e283f2e7d24e8c2b8b978cf8560da7936288edd3309a0d1a91cbbbdd4,PodSandboxId:e400933be9ef2edeeb58cb0821bdb67434eb46a6d336b31087253d28a967e270,Metadata:&ContainerMetadata{Name:kube-apiserver,A
ttempt:0,},Image:&ImageSpec{Image:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a,State:CONTAINER_EXITED,CreatedAt:1717184507540691782,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-343216,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0df25b900cf05d8075233e6eeda8bd2,},Annotations:map[string]string{io.kubernetes.container.hash: eebb16ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8327f76f-ff55-4f3f-a928-18580cc8aee5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	46f7b627433b8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   2 seconds ago        Running             coredns                   1                   6505d08fe3acb       coredns-7db6d8ff4d-bqv48
	6d79323e1334c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago        Running             coredns                   1                   e98c9434cd52b       coredns-7db6d8ff4d-sffbq
	2b835634ecf8d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   3 seconds ago        Running             kube-proxy                1                   1489f8f8e2bd9       kube-proxy-npg9m
	7e829888b0f36       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago        Running             storage-provisioner       1                   b557b028d0b66       storage-provisioner
	e607cf7559a4d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   7 seconds ago        Running             kube-controller-manager   1                   955e3a6d42914       kube-controller-manager-kubernetes-upgrade-343216
	e45f49c5354d5       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   7 seconds ago        Running             kube-scheduler            1                   383dd6d2bbd4d       kube-scheduler-kubernetes-upgrade-343216
	0a5a2300a5657       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   7 seconds ago        Running             etcd                      1                   ddb45bb571084       etcd-kubernetes-upgrade-343216
	d57582e1bb86e       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   7 seconds ago        Running             kube-apiserver            1                   23035c78ee64f       kube-apiserver-kubernetes-upgrade-343216
	52cc6364fdf46       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   41 seconds ago       Exited              coredns                   0                   e063eaf107224       coredns-7db6d8ff4d-bqv48
	e63e3b7221504       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   41 seconds ago       Exited              coredns                   0                   b0f88a293ff8d       coredns-7db6d8ff4d-sffbq
	66453a004980d       747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd   41 seconds ago       Exited              kube-proxy                0                   a108d302b3af1       kube-proxy-npg9m
	da84068404fa4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   42 seconds ago       Exited              storage-provisioner       0                   fc16adb3a8454       storage-provisioner
	a3f9c8fac218f       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   About a minute ago   Exited              etcd                      0                   1accbf3cedac7       etcd-kubernetes-upgrade-343216
	ac7b0cd60f435       a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035   About a minute ago   Exited              kube-scheduler            0                   7a6ea727e646f       kube-scheduler-kubernetes-upgrade-343216
	f1929ac07707d       25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c   About a minute ago   Exited              kube-controller-manager   0                   a281220550c04       kube-controller-manager-kubernetes-upgrade-343216
	d155b20e283f2       91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a   About a minute ago   Exited              kube-apiserver            0                   e400933be9ef2       kube-apiserver-kubernetes-upgrade-343216
	
	
	==> coredns [46f7b627433b82587ea731a4cf842226819dbca20b9a21fc2482c975622d363e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [52cc6364fdf4631dca65135e1b7b76741776da75e5d9250a49e093fd29d0b573] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1335272047]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.317) (total time: 16620ms):
	Trace[1335272047]: [16.620278806s] [16.620278806s] END
	[INFO] plugin/kubernetes: Trace[2120041674]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.316) (total time: 16621ms):
	Trace[2120041674]: [16.621224492s] [16.621224492s] END
	[INFO] plugin/kubernetes: Trace[1786350061]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.318) (total time: 16619ms):
	Trace[1786350061]: [16.619218492s] [16.619218492s] END
	
	
	==> coredns [6d79323e1334c6671690b709f19813a15494d7922632b1c756b75cc6962b0d47] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e63e3b72215044c1064929a25db1bdf749e73683ac99f772ec112cc2c653801e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/kubernetes: Trace[1474789961]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.320) (total time: 16616ms):
	Trace[1474789961]: [16.616785872s] [16.616785872s] END
	[INFO] plugin/kubernetes: Trace[1427989288]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.319) (total time: 16618ms):
	Trace[1427989288]: [16.618388294s] [16.618388294s] END
	[INFO] plugin/kubernetes: Trace[1508115030]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (31-May-2024 19:42:07.318) (total time: 16618ms):
	Trace[1508115030]: [16.618771791s] [16.618771791s] END
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-343216
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-343216
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 31 May 2024 19:41:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-343216
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 31 May 2024 19:42:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 31 May 2024 19:42:44 +0000   Fri, 31 May 2024 19:41:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 31 May 2024 19:42:44 +0000   Fri, 31 May 2024 19:41:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 31 May 2024 19:42:44 +0000   Fri, 31 May 2024 19:41:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 31 May 2024 19:42:44 +0000   Fri, 31 May 2024 19:41:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.31
	  Hostname:    kubernetes-upgrade-343216
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 46adc11965e044fb8254da00485006b5
	  System UUID:                46adc119-65e0-44fb-8254-da00485006b5
	  Boot ID:                    fb5f322d-beb4-4b3d-8e12-7d4a869031a4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-bqv48                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 coredns-7db6d8ff4d-sffbq                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     42s
	  kube-system                 etcd-kubernetes-upgrade-343216                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         53s
	  kube-system                 kube-apiserver-kubernetes-upgrade-343216             250m (12%)    0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-343216    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-npg9m                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-kubernetes-upgrade-343216             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  61s (x8 over 62s)  kubelet          Node kubernetes-upgrade-343216 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 62s)  kubelet          Node kubernetes-upgrade-343216 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x7 over 62s)  kubelet          Node kubernetes-upgrade-343216 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node kubernetes-upgrade-343216 event: Registered Node kubernetes-upgrade-343216 in Controller
	
	
	==> dmesg <==
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.246951] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.087347] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.085920] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.187109] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.174701] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.348097] systemd-fstab-generator[637]: Ignoring "noauto" option for root device
	[  +4.480339] systemd-fstab-generator[736]: Ignoring "noauto" option for root device
	[  +0.067339] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.015025] systemd-fstab-generator[858]: Ignoring "noauto" option for root device
	[ +11.459419] systemd-fstab-generator[1257]: Ignoring "noauto" option for root device
	[  +0.082902] kauditd_printk_skb: 97 callbacks suppressed
	[May31 19:42] kauditd_printk_skb: 21 callbacks suppressed
	[ +24.559431] systemd-fstab-generator[2200]: Ignoring "noauto" option for root device
	[  +0.088723] kauditd_printk_skb: 76 callbacks suppressed
	[  +0.063090] systemd-fstab-generator[2212]: Ignoring "noauto" option for root device
	[  +0.191790] systemd-fstab-generator[2226]: Ignoring "noauto" option for root device
	[  +0.154119] systemd-fstab-generator[2238]: Ignoring "noauto" option for root device
	[  +0.329351] systemd-fstab-generator[2266]: Ignoring "noauto" option for root device
	[  +6.105066] systemd-fstab-generator[2418]: Ignoring "noauto" option for root device
	[  +0.080715] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.545248] systemd-fstab-generator[2544]: Ignoring "noauto" option for root device
	[  +4.594009] kauditd_printk_skb: 74 callbacks suppressed
	[  +1.796410] systemd-fstab-generator[3450]: Ignoring "noauto" option for root device
	
	
	==> etcd [0a5a2300a565774c1d398903ff53e3db135dc38e249f64f77ba730909bcb3276] <==
	{"level":"info","ts":"2024-05-31T19:42:41.418406Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f809dd90516adee6","initial-advertise-peer-urls":["https://192.168.50.31:2380"],"listen-peer-urls":["https://192.168.50.31:2380"],"advertise-client-urls":["https://192.168.50.31:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.31:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-31T19:42:41.418472Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-31T19:42:41.418366Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:42:41.418648Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:42:41.41868Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-31T19:42:41.418717Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-05-31T19:42:41.418744Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-05-31T19:42:41.419085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 switched to configuration voters=(17873060208107511526)"}
	{"level":"info","ts":"2024-05-31T19:42:41.421034Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"320a05faed2c1128","local-member-id":"f809dd90516adee6","added-peer-id":"f809dd90516adee6","added-peer-peer-urls":["https://192.168.50.31:2380"]}
	{"level":"info","ts":"2024-05-31T19:42:41.423828Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"320a05faed2c1128","local-member-id":"f809dd90516adee6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:42:41.423899Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:42:42.560613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-31T19:42:42.560744Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-31T19:42:42.560827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 received MsgPreVoteResp from f809dd90516adee6 at term 2"}
	{"level":"info","ts":"2024-05-31T19:42:42.560874Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became candidate at term 3"}
	{"level":"info","ts":"2024-05-31T19:42:42.560907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 received MsgVoteResp from f809dd90516adee6 at term 3"}
	{"level":"info","ts":"2024-05-31T19:42:42.560983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became leader at term 3"}
	{"level":"info","ts":"2024-05-31T19:42:42.561019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f809dd90516adee6 elected leader f809dd90516adee6 at term 3"}
	{"level":"info","ts":"2024-05-31T19:42:42.565027Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f809dd90516adee6","local-member-attributes":"{Name:kubernetes-upgrade-343216 ClientURLs:[https://192.168.50.31:2379]}","request-path":"/0/members/f809dd90516adee6/attributes","cluster-id":"320a05faed2c1128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-31T19:42:42.565307Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:42:42.565577Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:42:42.567578Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-31T19:42:42.567615Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-31T19:42:42.56845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-31T19:42:42.571224Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.31:2379"}
	
	
	==> etcd [a3f9c8fac218f1b70d58797e09d1719cde25a4aa7b2140d416fa79b7d41d6f6f] <==
	{"level":"info","ts":"2024-05-31T19:41:48.105758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became candidate at term 2"}
	{"level":"info","ts":"2024-05-31T19:41:48.105841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 received MsgVoteResp from f809dd90516adee6 at term 2"}
	{"level":"info","ts":"2024-05-31T19:41:48.105868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f809dd90516adee6 became leader at term 2"}
	{"level":"info","ts":"2024-05-31T19:41:48.105961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f809dd90516adee6 elected leader f809dd90516adee6 at term 2"}
	{"level":"info","ts":"2024-05-31T19:41:48.113794Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"f809dd90516adee6","local-member-attributes":"{Name:kubernetes-upgrade-343216 ClientURLs:[https://192.168.50.31:2379]}","request-path":"/0/members/f809dd90516adee6/attributes","cluster-id":"320a05faed2c1128","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-31T19:41:48.113927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:41:48.114071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-31T19:41:48.125934Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-31T19:41:48.126104Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-31T19:41:48.126213Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-31T19:41:48.114282Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:41:48.130129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.31:2379"}
	{"level":"info","ts":"2024-05-31T19:41:48.15286Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"320a05faed2c1128","local-member-id":"f809dd90516adee6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:41:48.152971Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:41:48.153016Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-31T19:42:23.947006Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-31T19:42:23.947113Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"kubernetes-upgrade-343216","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.31:2380"],"advertise-client-urls":["https://192.168.50.31:2379"]}
	{"level":"warn","ts":"2024-05-31T19:42:23.947935Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:42:23.948082Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:42:24.039434Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.31:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-31T19:42:24.039615Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.31:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-31T19:42:24.039718Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f809dd90516adee6","current-leader-member-id":"f809dd90516adee6"}
	{"level":"info","ts":"2024-05-31T19:42:24.042462Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-05-31T19:42:24.042753Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.31:2380"}
	{"level":"info","ts":"2024-05-31T19:42:24.0428Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"kubernetes-upgrade-343216","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.31:2380"],"advertise-client-urls":["https://192.168.50.31:2379"]}
	
	
	==> kernel <==
	 19:42:48 up 1 min,  0 users,  load average: 0.80, 0.27, 0.10
	Linux kubernetes-upgrade-343216 5.10.207 #1 SMP Wed May 22 22:17:16 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [d155b20e283f2e7d24e8c2b8b978cf8560da7936288edd3309a0d1a91cbbbdd4] <==
	I0531 19:41:51.120635       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0531 19:41:51.120711       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:41:51.699491       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:41:51.751640       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0531 19:41:51.869883       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0531 19:41:51.879421       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.31]
	I0531 19:41:51.880386       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 19:41:51.884631       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0531 19:41:52.163905       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 19:41:57.368424       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 19:41:57.388766       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0531 19:41:57.410947       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 19:42:05.769941       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0531 19:42:06.173697       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0531 19:42:23.933000       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0531 19:42:23.966888       1 watcher.go:342] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0531 19:42:23.971470       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.971656       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.971876       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972008       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972118       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972228       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972346       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972467       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0531 19:42:23.972629       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d57582e1bb86e826c458c1be170d8f624168df3820068f0941d38c2907915d37] <==
	I0531 19:42:43.990937       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0531 19:42:43.991036       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0531 19:42:44.056334       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0531 19:42:44.056625       1 aggregator.go:165] initial CRD sync complete...
	I0531 19:42:44.056839       1 autoregister_controller.go:141] Starting autoregister controller
	I0531 19:42:44.056872       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0531 19:42:44.058847       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0531 19:42:44.058888       1 policy_source.go:224] refreshing policies
	I0531 19:42:44.118017       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0531 19:42:44.149156       1 shared_informer.go:320] Caches are synced for configmaps
	I0531 19:42:44.149648       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0531 19:42:44.149688       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0531 19:42:44.150008       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0531 19:42:44.150062       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0531 19:42:44.156051       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0531 19:42:44.156153       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0531 19:42:44.156211       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0531 19:42:44.157872       1 cache.go:39] Caches are synced for autoregister controller
	I0531 19:42:44.957295       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0531 19:42:45.216735       1 controller.go:615] quota admission added evaluator for: endpoints
	I0531 19:42:46.211945       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0531 19:42:46.228237       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0531 19:42:46.266849       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0531 19:42:46.394749       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0531 19:42:46.404484       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [e607cf7559a4dcf02da40d3ae0e7852c484aad01fe94e4d144d3abe667bff37a] <==
	I0531 19:42:46.144696       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0531 19:42:46.144730       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0531 19:42:46.147164       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0531 19:42:46.147390       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0531 19:42:46.147777       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0531 19:42:46.148162       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0531 19:42:46.148188       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0531 19:42:46.148371       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0531 19:42:46.150674       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0531 19:42:46.150789       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0531 19:42:46.150818       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0531 19:42:46.150847       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0531 19:42:46.150869       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0531 19:42:46.151272       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0531 19:42:46.151316       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0531 19:42:46.151340       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0531 19:42:46.151364       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0531 19:42:46.151412       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0531 19:42:46.151442       1 controllermanager.go:761] "Started controller" controller="resourcequota-controller"
	I0531 19:42:46.156391       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0531 19:42:46.156412       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0531 19:42:46.162303       1 controllermanager.go:761] "Started controller" controller="job-controller"
	I0531 19:42:46.162514       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0531 19:42:46.162624       1 shared_informer.go:313] Waiting for caches to sync for job
	I0531 19:42:46.162675       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	
	
	==> kube-controller-manager [f1929ac07707d882c03b81a3c16efb4ff0f6e37e05852e8d9fafbdc6c6ff7224] <==
	I0531 19:42:05.561909       1 shared_informer.go:320] Caches are synced for disruption
	I0531 19:42:05.568000       1 shared_informer.go:320] Caches are synced for GC
	I0531 19:42:05.573807       1 shared_informer.go:320] Caches are synced for resource quota
	I0531 19:42:05.574065       1 shared_informer.go:320] Caches are synced for node
	I0531 19:42:05.574144       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0531 19:42:05.574198       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0531 19:42:05.574230       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0531 19:42:05.574284       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0531 19:42:05.583902       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="kubernetes-upgrade-343216" podCIDRs=["10.244.0.0/24"]
	I0531 19:42:05.602261       1 shared_informer.go:320] Caches are synced for taint
	I0531 19:42:05.602619       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0531 19:42:05.602855       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-343216"
	I0531 19:42:05.604045       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0531 19:42:05.613147       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0531 19:42:05.618345       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0531 19:42:05.618867       1 shared_informer.go:320] Caches are synced for attach detach
	I0531 19:42:06.002507       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:42:06.061993       1 shared_informer.go:320] Caches are synced for garbage collector
	I0531 19:42:06.062032       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0531 19:42:06.315420       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="538.583415ms"
	I0531 19:42:06.326622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="11.151656ms"
	I0531 19:42:06.327040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="138.075µs"
	I0531 19:42:06.333408       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.461µs"
	I0531 19:42:07.496022       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.762µs"
	I0531 19:42:07.548874       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="49.339µs"
	
	
	==> kube-proxy [2b835634ecf8d02372368d2256f600357bf8a6f69a597665e22b381480f643c3] <==
	I0531 19:42:45.285852       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:42:45.323078       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.31"]
	I0531 19:42:45.441266       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:42:45.441345       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:42:45.441360       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:42:45.449899       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:42:45.450928       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:42:45.450944       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:42:45.452895       1 config.go:192] "Starting service config controller"
	I0531 19:42:45.452909       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:42:45.452924       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:42:45.452927       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:42:45.453202       1 config.go:319] "Starting node config controller"
	I0531 19:42:45.453207       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:42:45.554079       1 shared_informer.go:320] Caches are synced for node config
	I0531 19:42:45.554120       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:42:45.554155       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [66453a004980d52b15bd9d402acf0491f955de87110ba5c3f8ad3f2de0d134ae] <==
	I0531 19:42:06.866633       1 server_linux.go:69] "Using iptables proxy"
	I0531 19:42:06.886150       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.50.31"]
	I0531 19:42:07.015835       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0531 19:42:07.015964       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0531 19:42:07.016037       1 server_linux.go:165] "Using iptables Proxier"
	I0531 19:42:07.028876       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0531 19:42:07.029074       1 server.go:872] "Version info" version="v1.30.1"
	I0531 19:42:07.029092       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0531 19:42:07.030915       1 config.go:192] "Starting service config controller"
	I0531 19:42:07.030936       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0531 19:42:07.030974       1 config.go:101] "Starting endpoint slice config controller"
	I0531 19:42:07.030978       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0531 19:42:07.031491       1 config.go:319] "Starting node config controller"
	I0531 19:42:07.031498       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0531 19:42:07.131661       1 shared_informer.go:320] Caches are synced for node config
	I0531 19:42:07.131700       1 shared_informer.go:320] Caches are synced for service config
	I0531 19:42:07.131765       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [ac7b0cd60f435c0e199d5908b045bcc137ee7d5de98c3b9cc5325672a61fbdcd] <==
	E0531 19:41:50.291875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 19:41:51.131458       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:41:51.131623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 19:41:51.144262       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:41:51.144358       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:41:51.145700       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:41:51.145758       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:41:51.219679       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:41:51.219780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:41:51.222354       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:51.222415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 19:41:51.272225       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:41:51.272286       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 19:41:51.284867       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:51.285469       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0531 19:41:51.434939       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:41:51.435177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:41:51.480152       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0531 19:41:51.480204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0531 19:41:51.489093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:41:51.489217       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:41:51.554363       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:41:51.554472       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 19:41:54.577264       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0531 19:42:23.934851       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e45f49c5354d5b567f13ad76eb446da4410b1bac739ce050d288d13b3a8dda7e] <==
	W0531 19:42:44.052476       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0531 19:42:44.052520       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0531 19:42:44.052729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0531 19:42:44.052772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0531 19:42:44.052837       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0531 19:42:44.052868       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0531 19:42:44.052934       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0531 19:42:44.052963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0531 19:42:44.053179       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0531 19:42:44.053617       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0531 19:42:44.053291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0531 19:42:44.053333       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0531 19:42:44.053421       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:42:44.053450       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0531 19:42:44.053484       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0531 19:42:44.053570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:42:44.055335       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0531 19:42:44.055360       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0531 19:42:44.055409       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0531 19:42:44.055418       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0531 19:42:44.055427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0531 19:42:44.055499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0531 19:42:44.065221       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0531 19:42:44.065321       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0531 19:42:45.256922       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 31 19:42:40 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:40.678188    2551 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.31:8443: connect: connection refused" node="kubernetes-upgrade-343216"
	May 31 19:42:40 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:40.996428    2551 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-343216?timeout=10s\": dial tcp 192.168.50.31:8443: connect: connection refused" interval="800ms"
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:41.086498    2551 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-343216"
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:41.087940    2551 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.31:8443: connect: connection refused" node="kubernetes-upgrade-343216"
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: W0531 19:42:41.233753    2551 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:41.233820    2551 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: W0531 19:42:41.416035    2551 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:41.416118    2551 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: W0531 19:42:41.453620    2551 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: E0531 19:42:41.453697    2551 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.50.31:8443: connect: connection refused
	May 31 19:42:41 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:41.889812    2551 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-343216"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.106504    2551 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-343216"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.106682    2551 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-343216"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.108228    2551 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.109190    2551 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.359795    2551 apiserver.go:52] "Watching apiserver"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.363127    2551 topology_manager.go:215] "Topology Admit Handler" podUID="ee9cce32-95e8-42c5-a8f4-b3be0111d99a" podNamespace="kube-system" podName="storage-provisioner"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.364417    2551 topology_manager.go:215] "Topology Admit Handler" podUID="4434c8bf-b436-4146-9826-539b44243885" podNamespace="kube-system" podName="kube-proxy-npg9m"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.365416    2551 topology_manager.go:215] "Topology Admit Handler" podUID="bc0a96ea-d322-4927-b721-15aa1bf76081" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bqv48"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.365963    2551 topology_manager.go:215] "Topology Admit Handler" podUID="b74da612-db3a-4285-adb0-5bb8cc3b003a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sffbq"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.374871    2551 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.414059    2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ee9cce32-95e8-42c5-a8f4-b3be0111d99a-tmp\") pod \"storage-provisioner\" (UID: \"ee9cce32-95e8-42c5-a8f4-b3be0111d99a\") " pod="kube-system/storage-provisioner"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.414111    2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4434c8bf-b436-4146-9826-539b44243885-lib-modules\") pod \"kube-proxy-npg9m\" (UID: \"4434c8bf-b436-4146-9826-539b44243885\") " pod="kube-system/kube-proxy-npg9m"
	May 31 19:42:44 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:44.414140    2551 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4434c8bf-b436-4146-9826-539b44243885-xtables-lock\") pod \"kube-proxy-npg9m\" (UID: \"4434c8bf-b436-4146-9826-539b44243885\") " pod="kube-system/kube-proxy-npg9m"
	May 31 19:42:47 kubernetes-upgrade-343216 kubelet[2551]: I0531 19:42:47.564213    2551 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [7e829888b0f363315acb5d85a6350cf119cd64c7252fd1a7b79650540c19e103] <==
	I0531 19:42:45.132802       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0531 19:42:45.200028       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0531 19:42:45.200248       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0531 19:42:45.255696       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0531 19:42:45.255904       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-343216_7a9d2d69-52ea-4294-805b-e24c0c5dc662!
	I0531 19:42:45.256022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f442f4b-566f-482b-8e57-dc304648b434", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-343216_7a9d2d69-52ea-4294-805b-e24c0c5dc662 became leader
	I0531 19:42:45.356640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-343216_7a9d2d69-52ea-4294-805b-e24c0c5dc662!
	
	
	==> storage-provisioner [da84068404fa48b6cf2fe925a3e7c9ea48628a20c9444602bbe3b7b6cab71250] <==
	I0531 19:42:06.181655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0531 19:42:47.861531   65686 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18995-11107/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-343216 -n kubernetes-upgrade-343216
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-343216 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-343216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-343216
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-343216: (1.083180168s)
--- FAIL: TestKubernetesUpgrade (430.43s)
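The "bufio.Scanner: token too long" error in the stderr block above comes from Go's bufio.Scanner, which by default refuses any single token (here, one log line) longer than 64 KiB. The following is only a minimal sketch of that failure mode and the usual workaround; the file path is hypothetical and this is not minikube's actual logs.go code:

	// Sketch: reading a log file line by line; a single oversized line makes
	// Scan() stop and Err() return bufio.ErrTooLong ("bufio.Scanner: token too long").
	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, stands in for the file above
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		s := bufio.NewScanner(f)
		// Default max token size is bufio.MaxScanTokenSize (64 KiB); raise it to 1 MiB.
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for s.Scan() {
			_ = s.Text() // process one line
		}
		if err := s.Err(); err != nil {
			// Without the Buffer call above, an oversized line ends up here as ErrTooLong.
			fmt.Fprintln(os.Stderr, "scan:", err)
		}
	}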

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7200.047s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-720519 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.30.1
E0531 19:49:58.916786   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 19:51:21.968530   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 19:54:58.916314   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (16m23s)
	TestStartStop (18m38s)
	TestStartStop/group/default-k8s-diff-port (11m7s)
	TestStartStop/group/default-k8s-diff-port/serial (11m7s)
	TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7m22s)
	TestStartStop/group/embed-certs (12m35s)
	TestStartStop/group/embed-certs/serial (12m35s)
	TestStartStop/group/embed-certs/serial/SecondStart (8m7s)
	TestStartStop/group/no-preload (13m24s)
	TestStartStop/group/no-preload/serial (13m24s)
	TestStartStop/group/no-preload/serial/SecondStart (8m20s)
	TestStartStop/group/old-k8s-version (13m45s)
	TestStartStop/group/old-k8s-version/serial (13m45s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (7m26s)
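The "panic: test timed out after 2h0m0s" above, together with the goroutine dump that follows, is produced by Go's testing package once the overall -timeout given to the test binary elapses while tests are still running (goroutine 2540 below is that alarm firing). A rough, simplified sketch of the mechanism, not the real testing-package code:

	// Sketch only: an alarm goroutine panics if the work has not finished
	// before the deadline, which is what prints the goroutine dump below.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		timeout := 2 * time.Second // the real run used a 2h limit
		done := make(chan struct{})

		go func() {
			time.Sleep(10 * time.Second) // stand-in for tests that never finish in time
			close(done)
		}()

		select {
		case <-done:
			fmt.Println("PASS")
		case <-time.After(timeout):
			panic(fmt.Sprintf("test timed out after %v", timeout))
		}
	}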

                                                
                                                
goroutine 2540 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 12 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00003eb60, 0xc000c17bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006a6558, {0x49e90e0, 0x2b, 0x2b}, {0x26aa689?, 0xc000ad4900?, 0x4aa5860?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc00054f180)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc00054f180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00050dd80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 709 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b80b00, 0xc0016fca80)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 708
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2448 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d84a0, 0xc0000602a0}, 0xc001395f50, 0xc001395f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d84a0, 0xc0000602a0}, 0x0?, 0xc001395f50, 0xc001395f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d84a0?, 0xc0000602a0?}, 0xc001896ea0?, 0xc00050c900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001395fd0?, 0x9aba85?, 0xc001772f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2454
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2555 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc00149a6e0, 0xc0008178c0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2552
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2492 [IO wait]:
internal/poll.runtime_pollWait(0x7f19281ddfe0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014c03c0?, 0xc001679a1f?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014c03c0, {0xc001679a1f, 0x1e5e1, 0x1e5e1})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000886130, {0xc001679a1f?, 0xc001398d30?, 0x1fe0a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ed02a0, {0x36b2fe0, 0xc001f2e098})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001ed02a0}, {0x36b2fe0, 0xc001f2e098}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000886130?, {0x36b3120, 0xc001ed02a0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000886130, {0x36b3120, 0xc001ed02a0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001ed02a0}, {0x36b3040, 0xc000886130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000bee7e0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2490
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2375 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d84a0, 0xc0000602a0}, 0xc00140f750, 0xc0014aef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d84a0, 0xc0000602a0}, 0x40?, 0xc00140f750, 0xc00140f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d84a0?, 0xc0000602a0?}, 0xc000bd6d00?, 0x551c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00140f7d0?, 0x593064?, 0xc000816e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 99 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 98
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 2528 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f192869b258, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d62e40?, 0xc00188c388?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d62e40, {0xc00188c388, 0x478, 0x478})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009a9b8, {0xc00188c388?, 0x2199da0?, 0x230?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001bda420, {0x36b2fe0, 0xc001f2e150})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001bda420}, {0x36b2fe0, 0xc001f2e150}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009a9b8?, {0x36b3120, 0xc001bda420})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00009a9b8, {0x36b3120, 0xc001bda420})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001bda420}, {0x36b3040, 0xc00009a9b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00050c980?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2527
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2287 [chan receive, 8 minutes]:
testing.(*T).Run(0xc0019849c0, {0x265d223?, 0x60400000004?}, 0xc000822280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0019849c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0019849c0, 0xc000752a80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1873
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2554 [IO wait]:
internal/poll.runtime_pollWait(0x7f192869ac88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014c1380?, 0xc002095c52?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014c1380, {0xc002095c52, 0x383ae, 0x383ae})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000886758, {0xc002095c52?, 0xc000093d04?, 0x3fe7c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ed0a20, {0x36b2fe0, 0xc001f2e1c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001ed0a20}, {0x36b2fe0, 0xc001f2e1c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000886758?, {0x36b3120, 0xc001ed0a20})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000886758, {0x36b3120, 0xc001ed0a20})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001ed0a20}, {0x36b3040, 0xc000886758}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001f506c0?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2552
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2357 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000812540, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2370
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 1872 [chan receive, 18 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0014b2000, 0x315a870)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1724
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 390 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013caa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 296
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1687 [chan receive, 16 minutes]:
testing.(*T).Run(0xc00003e4e0, {0x264fece?, 0x55149c?}, 0xc001994048)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00003e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00003e4e0, 0x315a650)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2315 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00003f1e0, {0x265d223?, 0x60400000004?}, 0xc000822000)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00003f1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00003f1e0, 0xc00098a100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1972
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 855 [select, 75 minutes]:
net/http.(*persistConn).writeLoop(0xc001934c60)
	/usr/local/go/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 852
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2447 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001c2c490, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f200?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013caae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001c2c4c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006f4150, {0x36b4540, 0xc00204a000}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006f4150, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2454
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef
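
Workers such as goroutine 2447 above block inside workqueue.(*Type).Get whenever their queue is empty. The sketch below shows that Get/Done cycle with client-go's workqueue package; it is illustrative only, with a made-up "rotate-cert" item rather than the dynamicClientCert implementation.

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.New()
	defer q.ShutDown()

	q.Add("rotate-cert")

	// Get blocks on a sync.Cond while the queue is empty, which is the
	// "[sync.Cond.Wait]" state of the worker goroutines in the dump. Here an
	// item was queued first, so it returns immediately.
	item, shutdown := q.Get()
	if shutdown {
		return
	}
	fmt.Println("processing", item)
	q.Done(item) // mark the item finished so the queue can hand it out again later
}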

                                                
                                                
goroutine 221 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7f192869b730, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000822380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000822380)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc000783360)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000783360)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0001620f0, {0x36cb2c0, 0xc000783360})
	/usr/local/go/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0001620f0)
	/usr/local/go/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00070d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 218
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129
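
Goroutine 221 above is a helper HTTP server that has simply been accepting connections for the whole run, which is expected for that kind of goroutine. A minimal sketch of the same serve-in-the-background pattern, with a loopback address and trivial handler standing in for whatever the test's proxy actually does:

package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Listen first so the chosen port is known, then serve in the background.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	srv := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})}

	// Serve blocks in Accept until the listener is closed; a goroutine running
	// it looks exactly like the "[IO wait]" server goroutine in the dump.
	go srv.Serve(ln)

	fmt.Println("serving on", ln.Addr())
	_ = srv.Close() // closing the server unblocks Accept and ends that goroutine
}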

                                                
                                                
goroutine 403 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36d84a0, 0xc0000602a0}, 0xc00140b750, 0xc0014b9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36d84a0, 0xc0000602a0}, 0x40?, 0xc00140b750, 0xc00140b798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36d84a0?, 0xc0000602a0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00140b7d0?, 0x593064?, 0xc000897740?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 391
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2539 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000992c60, 0xc0015270e0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2536
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 391 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000c9a880, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 296
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 506 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b80160, 0xc0019ca6c0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 505
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1970 [chan receive, 18 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014b2340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014b2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014b2340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0014b2340, 0xc000c9a100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1742 +0x390
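
Several of the long "chan receive" goroutines above are subtests parked inside t.Parallel, waiting for a free parallel slot while other tests run. A minimal sketch of that pattern with generic subtest names (not the real StartStop or NetworkPlugins subtests):

package example_test

import (
	"testing"
	"time"
)

func TestParallelSubtests(t *testing.T) {
	for _, name := range []string{"first", "second", "third"} {
		name := name
		t.Run(name, func(t *testing.T) {
			// Parallel signals the test runner and then blocks in
			// testContext.waitParallel (the "chan receive" state in the dump)
			// until a slot is available under -test.parallel.
			t.Parallel()
			time.Sleep(10 * time.Millisecond)
			t.Log("done:", name)
		})
	}
}

Running it with something like "go test -run TestParallelSubtests -parallel 1 -v" shows the subtests queuing up one at a time, which is the same behaviour these parked goroutines reflect.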

                                                
                                                
goroutine 854 [select, 75 minutes]:
net/http.(*persistConn).readLoop(0xc001934c60)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 852
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 1971 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0014b24e0, {0x265146a?, 0x0?}, 0xc000753500)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014b24e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014b24e0, 0xc000c9a140)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 747 [chan send, 75 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b81ce0, 0xc0016fd3e0)
	/usr/local/go/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 356
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2491 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f192869af70, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014c0300?, 0xc0008baad4?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014c0300, {0xc0008baad4, 0x52c, 0x52c})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000886110, {0xc0008baad4?, 0x2199da0?, 0x229?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ed0240, {0x36b2fe0, 0xc001f2e080})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001ed0240}, {0x36b2fe0, 0xc001f2e080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000886110?, {0x36b3120, 0xc001ed0240})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000886110, {0x36b3120, 0xc001ed0240})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001ed0240}, {0x36b3040, 0xc000886110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000822000?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2490
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2538 [IO wait]:
internal/poll.runtime_pollWait(0x7f192869ad80, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d637a0?, 0xc001634329?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d637a0, {0xc001634329, 0x3cd7, 0x3cd7})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009abd0, {0xc001634329?, 0xc000095d30?, 0x10000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001bda9c0, {0x36b2fe0, 0xc001f2e1d8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001bda9c0}, {0x36b2fe0, 0xc001f2e1d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009abd0?, {0x36b3120, 0xc001bda9c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00009abd0, {0x36b3120, 0xc001bda9c0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001bda9c0}, {0x36b3040, 0xc00009abd0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0019ca420?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2536
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2376 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2375
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 402 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000c9a850, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f200?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013ca960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000c9a880)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00006b100, {0x36b4540, 0xc00155b860}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00006b100, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 391
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 404 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 403
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2374 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000812510, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x213f200?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b0eae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000812540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018e8990, {0x36b4540, 0xc001a70630}, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018e8990, 0x3b9aca00, 0x0, 0x1, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2357
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2553 [IO wait, 4 minutes]:
internal/poll.runtime_pollWait(0x7f192869a8a8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0014c12c0?, 0xc0017a72dd?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0014c12c0, {0xc0017a72dd, 0x523, 0x523})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000886720, {0xc0017a72dd?, 0x2199da0?, 0x208?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ed09f0, {0x36b2fe0, 0xc0008c81e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001ed09f0}, {0x36b2fe0, 0xc0008c81e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000886720?, {0x36b3120, 0xc001ed09f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000886720, {0x36b3120, 0xc001ed09f0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001ed09f0}, {0x36b3040, 0xc000886720}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000822280?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2552
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2530 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc000992840, 0xc0015268a0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2527
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2064 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001e7e9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001e7e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001e7e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001e7e9c0, 0xc00050c480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2493 [select, 8 minutes]:
os/exec.(*Cmd).watchCtx(0xc00149a000, 0xc000816fc0)
	/usr/local/go/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2490
	/usr/local/go/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 1994 [chan receive, 16 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0014b2d00, 0xc001994048)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1687
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2065 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001e7ed00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001e7ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001e7ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001e7ed00, 0xc00050c580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2552 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x10bf5, 0xc0008a2ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001ccab40)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001ccab40)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00149a6e0)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00149a6e0)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc001985040, 0xc00149a6e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d82e0, 0xc00041c0e0}, 0xc001985040, {0xc001992318, 0x16}, {0x0?, 0xc001395760?}, {0x551353?, 0x4a16cf?}, {0xc000207200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001985040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001985040, 0xc000822280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2287
	/usr/local/go/src/testing/testing.go:1742 +0x390
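
Goroutine 2552 above is simply waiting for its child process: Run calls Wait, which parks the goroutine in waitid until the second "minikube start" exits. The helper below is a hypothetical stand-in for a runner like integration.Run, included only to show where that "[syscall]" state comes from; runCmd and the "true" command are placeholders, not the real helper or command.

package example_test

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// runCmd runs one external command on behalf of a test and blocks until the
// child exits or the context kills it; while it waits, the goroutine sits in
// os.(*Process).blockUntilWaitable, the "[syscall]" state seen in the dump.
func runCmd(ctx context.Context, t *testing.T, name string, args ...string) []byte {
	t.Helper()
	start := time.Now()
	out, err := exec.CommandContext(ctx, name, args...).CombinedOutput()
	t.Logf("%s %v took %s (err=%v)", name, args, time.Since(start), err)
	return out
}

func TestRunHelperSketch(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	runCmd(ctx, t, "true")
}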

                                                
                                                
goroutine 2341 [chan receive, 8 minutes]:
testing.(*T).Run(0xc00003ed00, {0x265d223?, 0x60400000004?}, 0xc00050c980)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00003ed00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00003ed00, 0xc000752e00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1974
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2152 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001984680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001984680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001984680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001984680, 0xc000822600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2537 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7f192869b068, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d636e0?, 0xc00188cac9?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d636e0, {0xc00188cac9, 0x537, 0x537})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009abb8, {0xc00188cac9?, 0x7f196f967a68?, 0x213?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001bda990, {0x36b2fe0, 0xc0008867a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001bda990}, {0x36b2fe0, 0xc0008867a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009abb8?, {0x36b3120, 0xc001bda990})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00009abb8, {0x36b3120, 0xc001bda990})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001bda990}, {0x36b3040, 0xc00009abb8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc00050ca80?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2536
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 2536 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x10c46, 0xc00089dab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001e50bd0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001e50bd0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000992c60)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000992c60)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000bd7a00, 0xc000992c60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d82e0, 0xc00046a1c0}, 0xc000bd7a00, {0xc001be6220, 0x1c}, {0x0?, 0xc001a05f60?}, {0x551353?, 0x4a16cf?}, {0xc000824a00, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000bd7a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000bd7a00, 0xc00050ca80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2349
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2490 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x109cc, 0xc0008a1ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001cca540)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001cca540)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00149a000)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc00149a000)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc001984d00, 0xc00149a000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d82e0, 0xc00046e0e0}, 0xc001984d00, {0xc000c1ef90, 0x11}, {0x0?, 0xc001394f60?}, {0x551353?, 0x4a16cf?}, {0xc000824000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001984d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001984d00, 0xc000822000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2315
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2349 [chan receive, 8 minutes]:
testing.(*T).Run(0xc001984b60, {0x265d223?, 0x60400000004?}, 0xc00050ca80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001984b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001984b60, 0xc000753500)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1971
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2062 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001e7e340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001e7e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001e7e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001e7e340, 0xc00050c000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1972 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0014b2680, {0x265146a?, 0x0?}, 0xc00098a100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014b2680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014b2680, 0xc000c9a180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2527 [syscall, 8 minutes]:
syscall.Syscall6(0xf7, 0x1, 0x10a57, 0xc000af1ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001e50660)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001e50660)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000992840)
	/usr/local/go/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc000992840)
	/usr/local/go/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000bd7860, 0xc000992840)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36d82e0, 0xc00046a070}, 0xc000bd7860, {0xc001524048, 0x12}, {0x0?, 0xc001398f60?}, {0x551353?, 0x4a16cf?}, {0xc000824200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000bd7860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc000bd7860, 0xc00050c980)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2529 [IO wait]:
internal/poll.runtime_pollWait(0x7f192869b350, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d62f00?, 0xc001a3545b?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d62f00, {0xc001a3545b, 0x1eba5, 0x1eba5})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009aa18, {0xc001a3545b?, 0xc000095d30?, 0x1fe7b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001bda450, {0x36b2fe0, 0xc0008c8070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36b3120, 0xc001bda450}, {0x36b2fe0, 0xc0008c8070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00009aa18?, {0x36b3120, 0xc001bda450})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc00009aa18, {0x36b3120, 0xc001bda450})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x36b3120, 0xc001bda450}, {0x36b3040, 0xc00009aa18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0019ca420?)
	/usr/local/go/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2527
	/usr/local/go/src/os/exec/exec.go:723 +0x9ab

                                                
                                                
goroutine 1724 [chan receive, 18 minutes]:
testing.(*T).Run(0xc00003f040, {0x264fece?, 0x551353?}, 0x315a870)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00003f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00003f040, 0x315a698)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1873 [chan receive, 14 minutes]:
testing.(*T).Run(0xc0014b21a0, {0x265146a?, 0x0?}, 0xc000752a80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014b21a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014b21a0, 0xc000c9a0c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2449 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2448
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 1974 [chan receive, 12 minutes]:
testing.(*T).Run(0xc0014b29c0, {0x265146a?, 0x0?}, 0xc000752e00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014b29c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0014b29c0, 0xc000c9a240)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1872
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2454 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001c2c4c0, 0xc0000602a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2153 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001984820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001984820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001984820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001984820, 0xc000822680)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2063 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001e7e820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001e7e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001e7e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001e7e820, 0xc00050c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 1995 [chan receive, 16 minutes]:
testing.(*testContext).waitParallel(0xc0008c67d0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0014b3040)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0014b3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0014b3040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0014b3040, 0xc00098a000)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1994
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2356 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b0ec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2370
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2453 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013cac00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.1/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                    

Test pass (165/210)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 52.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.1/json-events 13.77
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.06
18 TestDownloadOnly/v1.30.1/DeleteAll 0.13
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.54
22 TestOffline 62.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
28 TestCertOptions 80.97
29 TestCertExpiration 265.55
31 TestForceSystemdFlag 47.71
32 TestForceSystemdEnv 66.88
34 TestKVMDriverInstallOrUpdate 3.82
38 TestErrorSpam/setup 39.51
39 TestErrorSpam/start 0.32
40 TestErrorSpam/status 0.69
41 TestErrorSpam/pause 1.52
42 TestErrorSpam/unpause 1.57
43 TestErrorSpam/stop 5.19
46 TestFunctional/serial/CopySyncFile 0
47 TestFunctional/serial/StartWithProxy 55.02
48 TestFunctional/serial/AuditLog 0
49 TestFunctional/serial/SoftStart 35.69
50 TestFunctional/serial/KubeContext 0.04
51 TestFunctional/serial/KubectlGetPods 0.08
54 TestFunctional/serial/CacheCmd/cache/add_remote 3.15
55 TestFunctional/serial/CacheCmd/cache/add_local 2.21
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
59 TestFunctional/serial/CacheCmd/cache/cache_reload 1.54
60 TestFunctional/serial/CacheCmd/cache/delete 0.09
61 TestFunctional/serial/MinikubeKubectlCmd 0.09
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
63 TestFunctional/serial/ExtraConfig 49.88
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 1.49
66 TestFunctional/serial/LogsFileCmd 1.51
67 TestFunctional/serial/InvalidService 4.1
69 TestFunctional/parallel/ConfigCmd 0.31
70 TestFunctional/parallel/DashboardCmd 12.84
71 TestFunctional/parallel/DryRun 0.27
72 TestFunctional/parallel/InternationalLanguage 0.16
73 TestFunctional/parallel/StatusCmd 1.14
77 TestFunctional/parallel/ServiceCmdConnect 8.59
78 TestFunctional/parallel/AddonsCmd 0.12
79 TestFunctional/parallel/PersistentVolumeClaim 36.18
81 TestFunctional/parallel/SSHCmd 0.6
82 TestFunctional/parallel/CpCmd 1.28
83 TestFunctional/parallel/MySQL 25.7
84 TestFunctional/parallel/FileSync 0.25
85 TestFunctional/parallel/CertSync 1.53
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.41
93 TestFunctional/parallel/License 0.67
94 TestFunctional/parallel/ServiceCmd/DeployApp 10.22
95 TestFunctional/parallel/Version/short 0.05
96 TestFunctional/parallel/Version/components 0.96
97 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
98 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
99 TestFunctional/parallel/ImageCommands/ImageListJson 0.65
100 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
102 TestFunctional/parallel/ImageCommands/Setup 2.02
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.08
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.08
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
107 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.1
108 TestFunctional/parallel/ProfileCmd/profile_list 0.42
109 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
110 TestFunctional/parallel/MountCmd/any-port 22.62
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.23
112 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.71
113 TestFunctional/parallel/ServiceCmd/List 0.3
114 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
115 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
116 TestFunctional/parallel/ServiceCmd/Format 0.34
117 TestFunctional/parallel/ServiceCmd/URL 0.4
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.31
119 TestFunctional/parallel/ImageCommands/ImageRemove 1.01
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.44
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 5.12
122 TestFunctional/parallel/MountCmd/specific-port 2.04
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.46
133 TestFunctional/delete_addon-resizer_images 0.06
134 TestFunctional/delete_my-image_image 0.01
135 TestFunctional/delete_minikube_cached_images 0.01
139 TestMultiControlPlane/serial/StartCluster 206.66
140 TestMultiControlPlane/serial/DeployApp 6.51
141 TestMultiControlPlane/serial/PingHostFromPods 1.2
142 TestMultiControlPlane/serial/AddWorkerNode 47.75
143 TestMultiControlPlane/serial/NodeLabels 0.06
144 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
145 TestMultiControlPlane/serial/CopyFile 12.41
147 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.48
149 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
152 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
158 TestJSONOutput/start/Command 53.13
159 TestJSONOutput/start/Audit 0
161 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
162 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
164 TestJSONOutput/pause/Command 0.67
165 TestJSONOutput/pause/Audit 0
167 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/unpause/Command 0.6
171 TestJSONOutput/unpause/Audit 0
173 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/stop/Command 9.62
177 TestJSONOutput/stop/Audit 0
179 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
181 TestErrorJSONOutput 0.18
186 TestMainNoArgs 0.04
187 TestMinikubeProfile 81.61
190 TestMountStart/serial/StartWithMountFirst 27.92
191 TestMountStart/serial/VerifyMountFirst 0.36
192 TestMountStart/serial/StartWithMountSecond 27.75
193 TestMountStart/serial/VerifyMountSecond 0.35
194 TestMountStart/serial/DeleteFirst 0.66
195 TestMountStart/serial/VerifyMountPostDelete 0.35
196 TestMountStart/serial/Stop 1.26
197 TestMountStart/serial/RestartStopped 23.34
198 TestMountStart/serial/VerifyMountPostStop 0.36
201 TestMultiNode/serial/FreshStart2Nodes 101.88
202 TestMultiNode/serial/DeployApp2Nodes 5.33
203 TestMultiNode/serial/PingHostFrom2Pods 0.8
204 TestMultiNode/serial/AddNode 38.26
205 TestMultiNode/serial/MultiNodeLabels 0.06
206 TestMultiNode/serial/ProfileList 0.21
207 TestMultiNode/serial/CopyFile 6.96
208 TestMultiNode/serial/StopNode 2.29
209 TestMultiNode/serial/StartAfterStop 28.64
211 TestMultiNode/serial/DeleteNode 2.15
213 TestMultiNode/serial/RestartMultiNode 188.56
214 TestMultiNode/serial/ValidateNameConflict 43.08
221 TestScheduledStopUnix 115.09
225 TestRunningBinaryUpgrade 204.52
229 TestStoppedBinaryUpgrade/Setup 2.72
230 TestStoppedBinaryUpgrade/Upgrade 174.9
239 TestPause/serial/Start 98.29
240 TestPause/serial/SecondStartNoReconfiguration 33.51
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
244 TestNoKubernetes/serial/StartWithK8s 45.32
245 TestPause/serial/Pause 0.69
246 TestPause/serial/VerifyStatus 0.24
247 TestPause/serial/Unpause 0.65
248 TestPause/serial/PauseAgain 0.8
249 TestPause/serial/DeletePaused 0.95
250 TestPause/serial/VerifyDeletedResources 0.38
262 TestNoKubernetes/serial/StartWithStopK8s 43.54
263 TestNoKubernetes/serial/Start 46.52
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
265 TestNoKubernetes/serial/ProfileList 0.84
266 TestNoKubernetes/serial/Stop 1.32
267 TestNoKubernetes/serial/StartNoArgs 62.39
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
x
+
TestDownloadOnly/v1.20.0/json-events (52.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-642435 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-642435 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (52.222455s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (52.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-642435
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-642435: exit status 85 (56.733845ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-642435 | jenkins | v1.33.1 | 31 May 24 17:55 UTC |          |
	|         | -p download-only-642435        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 17:55:25
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:55:25.544148   18384 out.go:291] Setting OutFile to fd 1 ...
	I0531 17:55:25.544403   18384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:55:25.544412   18384 out.go:304] Setting ErrFile to fd 2...
	I0531 17:55:25.544416   18384 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:55:25.544562   18384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	W0531 17:55:25.544659   18384 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18995-11107/.minikube/config/config.json: open /home/jenkins/minikube-integration/18995-11107/.minikube/config/config.json: no such file or directory
	I0531 17:55:25.545222   18384 out.go:298] Setting JSON to true
	I0531 17:55:25.546115   18384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2271,"bootTime":1717175854,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:55:25.546169   18384 start.go:139] virtualization: kvm guest
	I0531 17:55:25.548561   18384 out.go:97] [download-only-642435] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:55:25.549982   18384 out.go:169] MINIKUBE_LOCATION=18995
	W0531 17:55:25.548661   18384 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball: no such file or directory
	I0531 17:55:25.548701   18384 notify.go:220] Checking for updates...
	I0531 17:55:25.551337   18384 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:55:25.552512   18384 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 17:55:25.553720   18384 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 17:55:25.554827   18384 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0531 17:55:25.556986   18384 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 17:55:25.557176   18384 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 17:55:25.652840   18384 out.go:97] Using the kvm2 driver based on user configuration
	I0531 17:55:25.652865   18384 start.go:297] selected driver: kvm2
	I0531 17:55:25.652876   18384 start.go:901] validating driver "kvm2" against <nil>
	I0531 17:55:25.653339   18384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:55:25.653481   18384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 17:55:25.668233   18384 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 17:55:25.668292   18384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 17:55:25.668753   18384 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0531 17:55:25.668887   18384 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 17:55:25.668939   18384 cni.go:84] Creating CNI manager for ""
	I0531 17:55:25.668951   18384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 17:55:25.668983   18384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 17:55:25.669048   18384 start.go:340] cluster config:
	{Name:download-only-642435 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-642435 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 17:55:25.669253   18384 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:55:25.671177   18384 out.go:97] Downloading VM boot image ...
	I0531 17:55:25.671204   18384 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/iso/amd64/minikube-v1.33.1-1716398070-18934-amd64.iso
	I0531 17:55:36.075644   18384 out.go:97] Starting "download-only-642435" primary control-plane node in "download-only-642435" cluster
	I0531 17:55:36.075659   18384 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 17:55:36.183840   18384 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0531 17:55:36.183867   18384 cache.go:56] Caching tarball of preloaded images
	I0531 17:55:36.184034   18384 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 17:55:36.185763   18384 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0531 17:55:36.185776   18384 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0531 17:55:36.299086   18384 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0531 17:55:50.459497   18384 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0531 17:55:50.459589   18384 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0531 17:55:51.358754   18384 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0531 17:55:51.359140   18384 profile.go:143] Saving config to /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/download-only-642435/config.json ...
	I0531 17:55:51.359171   18384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/download-only-642435/config.json: {Name:mka7084d49070e7cf6c721f5a0e90156b15e05d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0531 17:55:51.359319   18384 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0531 17:55:51.359475   18384 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-642435 host does not exist
	  To start a cluster, run: "minikube start -p download-only-642435"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
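
The non-zero exit recorded above is expected rather than a failure: the profile was created with --download-only, so no host was ever booted and `minikube logs` has nothing to collect. The same behaviour can be reproduced by hand at this point in the run, before the profile is deleted below:

	out/minikube-linux-amd64 logs -p download-only-642435
	echo $?   # 85 in this run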

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-642435
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/json-events (13.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-213136 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-213136 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (13.766827449s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.77s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-213136
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-213136: exit status 85 (56.573588ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-642435 | jenkins | v1.33.1 | 31 May 24 17:55 UTC |                     |
	|         | -p download-only-642435        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 31 May 24 17:56 UTC | 31 May 24 17:56 UTC |
	| delete  | -p download-only-642435        | download-only-642435 | jenkins | v1.33.1 | 31 May 24 17:56 UTC | 31 May 24 17:56 UTC |
	| start   | -o=json --download-only        | download-only-213136 | jenkins | v1.33.1 | 31 May 24 17:56 UTC |                     |
	|         | -p download-only-213136        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/31 17:56:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0531 17:56:18.063823   18707 out.go:291] Setting OutFile to fd 1 ...
	I0531 17:56:18.064065   18707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:18.064073   18707 out.go:304] Setting ErrFile to fd 2...
	I0531 17:56:18.064078   18707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 17:56:18.064220   18707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 17:56:18.064739   18707 out.go:298] Setting JSON to true
	I0531 17:56:18.065585   18707 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2324,"bootTime":1717175854,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 17:56:18.065645   18707 start.go:139] virtualization: kvm guest
	I0531 17:56:18.067804   18707 out.go:97] [download-only-213136] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 17:56:18.069111   18707 out.go:169] MINIKUBE_LOCATION=18995
	I0531 17:56:18.067920   18707 notify.go:220] Checking for updates...
	I0531 17:56:18.071443   18707 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 17:56:18.072629   18707 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 17:56:18.073713   18707 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 17:56:18.074992   18707 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0531 17:56:18.077492   18707 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0531 17:56:18.077746   18707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 17:56:18.107969   18707 out.go:97] Using the kvm2 driver based on user configuration
	I0531 17:56:18.108005   18707 start.go:297] selected driver: kvm2
	I0531 17:56:18.108012   18707 start.go:901] validating driver "kvm2" against <nil>
	I0531 17:56:18.108440   18707 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:18.108527   18707 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18995-11107/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0531 17:56:18.123306   18707 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0531 17:56:18.123368   18707 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0531 17:56:18.123995   18707 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0531 17:56:18.124180   18707 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0531 17:56:18.124256   18707 cni.go:84] Creating CNI manager for ""
	I0531 17:56:18.124272   18707 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0531 17:56:18.124286   18707 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0531 17:56:18.124352   18707 start.go:340] cluster config:
	{Name:download-only-213136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-213136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 17:56:18.124477   18707 iso.go:125] acquiring lock: {Name:mk4f81a8fb1d94455dc58d649cbf0d7465ab79c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0531 17:56:18.126586   18707 out.go:97] Starting "download-only-213136" primary control-plane node in "download-only-213136" cluster
	I0531 17:56:18.126605   18707 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 17:56:18.244208   18707 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	I0531 17:56:18.244241   18707 cache.go:56] Caching tarball of preloaded images
	I0531 17:56:18.244380   18707 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime crio
	I0531 17:56:18.245983   18707 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0531 17:56:18.246001   18707 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4 ...
	I0531 17:56:18.359013   18707 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:a8c8ea593b2bc93a46ce7b040a44f86d -> /home/jenkins/minikube-integration/18995-11107/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-213136 host does not exist
	  To start a cluster, run: "minikube start -p download-only-213136"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-213136
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestBinaryMirror (0.54s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-412647 --alsologtostderr --binary-mirror http://127.0.0.1:37113 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-412647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-412647
--- PASS: TestBinaryMirror (0.54s)
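
TestBinaryMirror starts a download-only profile with --binary-mirror pointed at a local HTTP endpoint (127.0.0.1:37113 here), which redirects the kubectl/kubelet/kubeadm downloads away from their default upstream; the endpoint is served by the test itself, so the exact URL is only meaningful inside this run. The shape of the invocation, as exercised above, is:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-412647 --alsologtostderr \
	  --binary-mirror http://127.0.0.1:37113 --driver=kvm2 --container-runtime=crio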

                                                
                                    
x
+
TestOffline (62.66s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-285002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-285002 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m1.647631055s)
helpers_test.go:175: Cleaning up "offline-crio-285002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-285002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-285002: (1.010972528s)
--- PASS: TestOffline (62.66s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-705710
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-705710: exit status 85 (45.749358ms)

                                                
                                                
-- stdout --
	* Profile "addons-705710" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-705710"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-705710
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-705710: exit status 85 (45.324332ms)

                                                
                                                
-- stdout --
	* Profile "addons-705710" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-705710"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestCertOptions (80.97s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-396055 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-396055 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m19.513790128s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-396055 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-396055 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-396055 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-396055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-396055
--- PASS: TestCertOptions (80.97s)
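
The substantive assertions in TestCertOptions are that the extra --apiserver-ips/--apiserver-names values land in the apiserver serving certificate and that the non-default --apiserver-port shows up in the kubeconfig. A manual spot-check along the same lines (the grep patterns are illustrative, not what the test literally runs):

	out/minikube-linux-amd64 -p cert-options-396055 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E '192.168.15.15|www.google.com'
	kubectl --context cert-options-396055 config view | grep 8555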

                                                
                                    
x
+
TestCertExpiration (265.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0531 19:39:58.917085   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (53.317823705s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.241192207s)
helpers_test.go:175: Cleaning up "cert-expiration-149181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-149181
--- PASS: TestCertExpiration (265.55s)
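
TestCertExpiration is a two-phase flow: create the cluster with certificates that expire after three minutes, let them lapse, then confirm that a second start with a long --cert-expiration regenerates them and brings the cluster back. Stripped of the harness, the sequence exercised above is roughly:

	out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	sleep 180   # approximate wait for the 3m certificates to expire, inferred from the gap between the two starts in this run
	out/minikube-linux-amd64 start -p cert-expiration-149181 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio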

                                                
                                    
x
+
TestForceSystemdFlag (47.71s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-285796 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-285796 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (46.460092785s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-285796 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-285796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-285796
--- PASS: TestForceSystemdFlag (47.71s)
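
After the --force-systemd start, the test reads the CRI-O drop-in that minikube generates to confirm the configured cgroup driver. The same file can be inspected by hand; grepping for a cgroup_manager entry is an illustrative guess at the relevant setting, since the test's exact assertion is not shown in this excerpt:

	out/minikube-linux-amd64 -p force-systemd-flag-285796 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager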

                                                
                                    
x
+
TestForceSystemdEnv (66.88s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-726812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-726812 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m5.903387098s)
helpers_test.go:175: Cleaning up "force-systemd-env-726812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-726812
--- PASS: TestForceSystemdEnv (66.88s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.82s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.82s)

                                                
                                    
x
+
TestErrorSpam/setup (39.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-760126 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-760126 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-760126 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-760126 --driver=kvm2  --container-runtime=crio: (39.510351101s)
--- PASS: TestErrorSpam/setup (39.51s)

                                                
                                    
x
+
TestErrorSpam/start (0.32s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

                                                
                                    
x
+
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
x
+
TestErrorSpam/pause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 pause
--- PASS: TestErrorSpam/pause (1.52s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 unpause
--- PASS: TestErrorSpam/unpause (1.57s)

                                                
                                    
x
+
TestErrorSpam/stop (5.19s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 stop: (2.277391609s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 stop: (1.983116684s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-760126 --log_dir /tmp/nospam-760126 stop
--- PASS: TestErrorSpam/stop (5.19s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18995-11107/.minikube/files/etc/test/nested/copy/18372/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (55.02s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-928436 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (55.016222699s)
--- PASS: TestFunctional/serial/StartWithProxy (55.02s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (35.69s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-928436 --alsologtostderr -v=8: (35.685576325s)
functional_test.go:659: soft start took 35.686364108s for "functional-928436" cluster.
--- PASS: TestFunctional/serial/SoftStart (35.69s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-928436 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 cache add registry.k8s.io/pause:3.3: (1.11975105s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 cache add registry.k8s.io/pause:latest: (1.066165474s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.15s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-928436 /tmp/TestFunctionalserialCacheCmdcacheadd_local3242948699/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache add minikube-local-cache-test:functional-928436
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 cache add minikube-local-cache-test:functional-928436: (1.879324476s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache delete minikube-local-cache-test:functional-928436
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-928436
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.21s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (205.884085ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.54s)
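
The cache_reload sequence doubles as a handy recipe: remove an image inside the node, confirm the CRI no longer has it, then let `cache reload` push everything from minikube's local cache back in. Condensed from the commands the test runs above:

	out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image is gone
	out/minikube-linux-amd64 -p functional-928436 cache reload
	out/minikube-linux-amd64 -p functional-928436 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again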

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 kubectl -- --context functional-928436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-928436 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (49.88s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-928436 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (49.877552192s)
functional_test.go:757: restart took 49.877675544s for "functional-928436" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (49.88s)
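
--extra-config follows a <component>.<flag>=<value> pattern; here it injects an admission-plugin list into the apiserver, and the restart above applies it to the already-running functional-928436 profile. The invocation, as run by the test:

	out/minikube-linux-amd64 start -p functional-928436 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all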

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-928436 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 logs: (1.489404983s)
--- PASS: TestFunctional/serial/LogsCmd (1.49s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 logs --file /tmp/TestFunctionalserialLogsFileCmd460960645/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 logs --file /tmp/TestFunctionalserialLogsFileCmd460960645/001/logs.txt: (1.510222539s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.1s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-928436 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-928436
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-928436: exit status 115 (268.538932ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.117:32439 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-928436 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.10s)
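
This test deliberately drives the SVC_UNREACHABLE path: testdata/invalidsvc.yaml (not reproduced in this report) defines a service with no running backing pod, so `minikube service` must fail with exit status 115 instead of opening a URL. The negative check, extracted from the log:

	kubectl --context functional-928436 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-928436; echo $?   # 115 (SVC_UNREACHABLE) in this run
	kubectl --context functional-928436 delete -f testdata/invalidsvc.yaml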

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 config get cpus: exit status 14 (52.447674ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 config get cpus: exit status 14 (45.480306ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-928436 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-928436 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 32600: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.84s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-928436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (136.592439ms)

-- stdout --
	* [functional-928436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0531 18:40:28.535059   32493 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:40:28.535263   32493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:28.535270   32493 out.go:304] Setting ErrFile to fd 2...
	I0531 18:40:28.535275   32493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:28.535424   32493 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:40:28.535920   32493 out.go:298] Setting JSON to false
	I0531 18:40:28.536793   32493 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4974,"bootTime":1717175854,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:40:28.536847   32493 start.go:139] virtualization: kvm guest
	I0531 18:40:28.538704   32493 out.go:177] * [functional-928436] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0531 18:40:28.539890   32493 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:40:28.541174   32493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:40:28.539927   32493 notify.go:220] Checking for updates...
	I0531 18:40:28.543659   32493 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:40:28.544727   32493 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:28.545960   32493 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:40:28.547263   32493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:40:28.548824   32493 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:40:28.549290   32493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:28.549337   32493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:28.563918   32493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37821
	I0531 18:40:28.564400   32493 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:28.564934   32493 main.go:141] libmachine: Using API Version  1
	I0531 18:40:28.565001   32493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:28.565373   32493 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:28.565602   32493 main.go:141] libmachine: (functional-928436) Calling .DriverName
	I0531 18:40:28.565877   32493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:40:28.566193   32493 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:28.566231   32493 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:28.581032   32493 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45443
	I0531 18:40:28.581407   32493 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:28.581828   32493 main.go:141] libmachine: Using API Version  1
	I0531 18:40:28.581842   32493 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:28.582161   32493 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:28.582328   32493 main.go:141] libmachine: (functional-928436) Calling .DriverName
	I0531 18:40:28.615077   32493 out.go:177] * Using the kvm2 driver based on existing profile
	I0531 18:40:28.616479   32493 start.go:297] selected driver: kvm2
	I0531 18:40:28.616500   32493 start.go:901] validating driver "kvm2" against &{Name:functional-928436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-928436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:40:28.616648   32493 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:40:28.619631   32493 out.go:177] 
	W0531 18:40:28.620807   32493 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0531 18:40:28.622083   32493 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-928436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-928436 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.003869ms)

-- stdout --
	* [functional-928436] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0531 18:40:01.086419   30882 out.go:291] Setting OutFile to fd 1 ...
	I0531 18:40:01.086574   30882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:01.086586   30882 out.go:304] Setting ErrFile to fd 2...
	I0531 18:40:01.086593   30882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 18:40:01.086938   30882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 18:40:01.087604   30882 out.go:298] Setting JSON to false
	I0531 18:40:01.088841   30882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4947,"bootTime":1717175854,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0531 18:40:01.088925   30882 start.go:139] virtualization: kvm guest
	I0531 18:40:01.091265   30882 out.go:177] * [functional-928436] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0531 18:40:01.092900   30882 out.go:177]   - MINIKUBE_LOCATION=18995
	I0531 18:40:01.094064   30882 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0531 18:40:01.092901   30882 notify.go:220] Checking for updates...
	I0531 18:40:01.096362   30882 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	I0531 18:40:01.097619   30882 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	I0531 18:40:01.098861   30882 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0531 18:40:01.100076   30882 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0531 18:40:01.101803   30882 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 18:40:01.102420   30882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:01.102496   30882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:01.124374   30882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41273
	I0531 18:40:01.124929   30882 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:01.125695   30882 main.go:141] libmachine: Using API Version  1
	I0531 18:40:01.125723   30882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:01.126131   30882 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:01.126311   30882 main.go:141] libmachine: (functional-928436) Calling .DriverName
	I0531 18:40:01.126801   30882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0531 18:40:01.127241   30882 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 18:40:01.127295   30882 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 18:40:01.146420   30882 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40245
	I0531 18:40:01.146819   30882 main.go:141] libmachine: () Calling .GetVersion
	I0531 18:40:01.147307   30882 main.go:141] libmachine: Using API Version  1
	I0531 18:40:01.147339   30882 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 18:40:01.147670   30882 main.go:141] libmachine: () Calling .GetMachineName
	I0531 18:40:01.147885   30882 main.go:141] libmachine: (functional-928436) Calling .DriverName
	I0531 18:40:01.182794   30882 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0531 18:40:01.184068   30882 start.go:297] selected driver: kvm2
	I0531 18:40:01.184083   30882 start.go:901] validating driver "kvm2" against &{Name:functional-928436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18934/minikube-v1.33.1-1716398070-18934-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1716228441-18934@sha256:628b3f20803bc9c4302fd048087dd36cf2ff5dc9ab0ded395ec3288e2f1d0862 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.1 ClusterName:functional-928436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.117 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0531 18:40:01.184199   30882 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0531 18:40:01.186126   30882 out.go:177] 
	W0531 18:40:01.187442   30882 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0531 18:40:01.188601   30882 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-928436 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-928436 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-tcbz6" [0cade7ec-e925-425e-a550-1a584bc85363] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-tcbz6" [0cade7ec-e925-425e-a550-1a584bc85363] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005107567s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.117:31801
functional_test.go:1671: http://192.168.39.117:31801: success! body:

Hostname: hello-node-connect-57b4589c47-tcbz6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.117:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.117:31801
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (36.18s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6fcda199-026c-4cb2-87b2-4fa35676ed2c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004165983s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-928436 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-928436 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-928436 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-928436 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-928436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5f66331f-9b0d-4a74-a391-656c1ef16340] Pending
helpers_test.go:344: "sp-pod" [5f66331f-9b0d-4a74-a391-656c1ef16340] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5f66331f-9b0d-4a74-a391-656c1ef16340] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.168116186s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-928436 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-928436 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-928436 delete -f testdata/storage-provisioner/pod.yaml: (2.458305807s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-928436 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f09e6144-b2d7-47c3-8c71-2caf716b6062] Pending
2024/05/31 18:40:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:344: "sp-pod" [f09e6144-b2d7-47c3-8c71-2caf716b6062] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f09e6144-b2d7-47c3-8c71-2caf716b6062] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.005042675s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-928436 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.18s)

TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

TestFunctional/parallel/CpCmd (1.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh -n functional-928436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cp functional-928436:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3561142974/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh -n functional-928436 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh -n functional-928436 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.28s)

TestFunctional/parallel/MySQL (25.7s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-928436 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-whklv" [7189e6dd-bb69-4afa-8a51-d6d5e65e69f0] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-whklv" [7189e6dd-bb69-4afa-8a51-d6d5e65e69f0] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.004825879s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-928436 exec mysql-64454c8b5c-whklv -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-928436 exec mysql-64454c8b5c-whklv -- mysql -ppassword -e "show databases;": exit status 1 (404.947992ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-928436 exec mysql-64454c8b5c-whklv -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.70s)

TestFunctional/parallel/FileSync (0.25s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18372/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /etc/test/nested/copy/18372/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

TestFunctional/parallel/CertSync (1.53s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18372.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /etc/ssl/certs/18372.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18372.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /usr/share/ca-certificates/18372.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/183722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /etc/ssl/certs/183722.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/183722.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /usr/share/ca-certificates/183722.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-928436 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "sudo systemctl is-active docker": exit status 1 (202.755157ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "sudo systemctl is-active containerd": exit status 1 (208.973439ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.41s)

TestFunctional/parallel/License (0.67s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.67s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-928436 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-928436 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-s72bd" [4e271007-78f3-49d5-a04a-a6bfc092d1fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-s72bd" [4e271007-78f3-49d5-a04a-a6bfc092d1fe] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.017986699s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.22s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.96s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.96s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-928436 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-928436
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-928436
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240202-8f1494ea
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-928436 image ls --format short --alsologtostderr:
I0531 18:40:30.612498   32730 out.go:291] Setting OutFile to fd 1 ...
I0531 18:40:30.612751   32730 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:30.612761   32730 out.go:304] Setting ErrFile to fd 2...
I0531 18:40:30.612765   32730 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:30.613025   32730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
I0531 18:40:30.613570   32730 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:30.613670   32730 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:30.614024   32730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:30.614064   32730 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:30.628331   32730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34321
I0531 18:40:30.628713   32730 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:30.629338   32730 main.go:141] libmachine: Using API Version  1
I0531 18:40:30.629367   32730 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:30.629688   32730 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:30.629864   32730 main.go:141] libmachine: (functional-928436) Calling .GetState
I0531 18:40:30.631477   32730 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:30.631508   32730 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:30.645216   32730 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
I0531 18:40:30.645614   32730 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:30.646079   32730 main.go:141] libmachine: Using API Version  1
I0531 18:40:30.646101   32730 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:30.646389   32730 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:30.646547   32730 main.go:141] libmachine: (functional-928436) Calling .DriverName
I0531 18:40:30.646738   32730 ssh_runner.go:195] Run: systemctl --version
I0531 18:40:30.646759   32730 main.go:141] libmachine: (functional-928436) Calling .GetSSHHostname
I0531 18:40:30.649174   32730 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:30.649579   32730 main.go:141] libmachine: (functional-928436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:85:f7", ip: ""} in network mk-functional-928436: {Iface:virbr1 ExpiryTime:2024-05-31 19:37:37 +0000 UTC Type:0 Mac:52:54:00:03:85:f7 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-928436 Clientid:01:52:54:00:03:85:f7}
I0531 18:40:30.649604   32730 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined IP address 192.168.39.117 and MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:30.649770   32730 main.go:141] libmachine: (functional-928436) Calling .GetSSHPort
I0531 18:40:30.649906   32730 main.go:141] libmachine: (functional-928436) Calling .GetSSHKeyPath
I0531 18:40:30.650030   32730 main.go:141] libmachine: (functional-928436) Calling .GetSSHUsername
I0531 18:40:30.650147   32730 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/functional-928436/id_rsa Username:docker}
I0531 18:40:30.747187   32730 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 18:40:30.792238   32730 main.go:141] libmachine: Making call to close driver server
I0531 18:40:30.792255   32730 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:30.792563   32730 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
I0531 18:40:30.792568   32730 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:30.792590   32730 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:30.792603   32730 main.go:141] libmachine: Making call to close driver server
I0531 18:40:30.792622   32730 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:30.792830   32730 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:30.792846   32730 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-928436 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | 4f67c83422ec7 | 192MB  |
| localhost/minikube-local-cache-test     | functional-928436  | 5dda9874b9b0e | 3.33kB |
| registry.k8s.io/kube-controller-manager | v1.30.1            | 25a1387cdab82 | 112MB  |
| registry.k8s.io/kube-proxy              | v1.30.1            | 747097150317f | 85.9MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/kube-apiserver          | v1.30.1            | 91be940803172 | 118MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/google-containers/addon-resizer  | functional-928436  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| docker.io/kindest/kindnetd              | v20240202-8f1494ea | 4950bb10b3f87 | 65.3MB |
| localhost/my-image                      | functional-928436  | 2f397eb07193d | 1.47MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-scheduler          | v1.30.1            | a52dc94f0a912 | 63MB   |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-928436 image ls --format table --alsologtostderr:
I0531 18:40:37.502797   32979 out.go:291] Setting OutFile to fd 1 ...
I0531 18:40:37.503176   32979 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:37.503192   32979 out.go:304] Setting ErrFile to fd 2...
I0531 18:40:37.503204   32979 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:37.503451   32979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
I0531 18:40:37.504212   32979 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:37.504362   32979 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:37.504946   32979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:37.505019   32979 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:37.523394   32979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
I0531 18:40:37.523803   32979 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:37.524358   32979 main.go:141] libmachine: Using API Version  1
I0531 18:40:37.524376   32979 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:37.524734   32979 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:37.524946   32979 main.go:141] libmachine: (functional-928436) Calling .GetState
I0531 18:40:37.526830   32979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:37.526864   32979 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:37.541570   32979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33265
I0531 18:40:37.541950   32979 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:37.542419   32979 main.go:141] libmachine: Using API Version  1
I0531 18:40:37.542433   32979 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:37.542787   32979 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:37.542979   32979 main.go:141] libmachine: (functional-928436) Calling .DriverName
I0531 18:40:37.543224   32979 ssh_runner.go:195] Run: systemctl --version
I0531 18:40:37.543272   32979 main.go:141] libmachine: (functional-928436) Calling .GetSSHHostname
I0531 18:40:37.546189   32979 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:37.546682   32979 main.go:141] libmachine: (functional-928436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:85:f7", ip: ""} in network mk-functional-928436: {Iface:virbr1 ExpiryTime:2024-05-31 19:37:37 +0000 UTC Type:0 Mac:52:54:00:03:85:f7 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-928436 Clientid:01:52:54:00:03:85:f7}
I0531 18:40:37.546710   32979 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined IP address 192.168.39.117 and MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:37.546814   32979 main.go:141] libmachine: (functional-928436) Calling .GetSSHPort
I0531 18:40:37.546970   32979 main.go:141] libmachine: (functional-928436) Calling .GetSSHKeyPath
I0531 18:40:37.547106   32979 main.go:141] libmachine: (functional-928436) Calling .GetSSHUsername
I0531 18:40:37.547273   32979 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/functional-928436/id_rsa Username:docker}
I0531 18:40:37.652250   32979 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 18:40:37.720641   32979 main.go:141] libmachine: Making call to close driver server
I0531 18:40:37.720661   32979 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:37.720945   32979 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:37.720978   32979 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:37.720994   32979 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
I0531 18:40:37.720998   32979 main.go:141] libmachine: Making call to close driver server
I0531 18:40:37.721010   32979 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:37.721254   32979 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:37.721268   32979 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-928436 image ls --format json --alsologtostderr:
[{"id":"5dda9874b9b0ea5789aa3d9b35ff7e46625a42f6f7af913da833465c44701fb2","repoDigests":["localhost/minikube-local-cache-test@sha256:e14228c57f1900e680dd68b904623cc6247047e9fdf82d8e6a6e7a77c06678ec"],"repoTags":["localhost/minikube-local-cache-test:functional-928436"],"size":"3330"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52","registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027"],"repoTags":["registry.k8s.io/kube-contro
ller-manager:v1.30.1"],"size":"112170310"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100","repoDigests":["docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d88
5b6c951fdcf42589d06845f64d18aec6a3c4d","docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232"],"repoTags":["docker.io/library/nginx:latest"],"size":"191814165"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5","repoDigests":["docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988","docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"],"repoTags":["docker.io/kindest/kindnetd:v20240202-8f1494ea"],"size":"65291810"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be458
1310a879a096c71a","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea","registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117601759"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"2f397eb07193d878c2ea20a1ff0abe9bdbebf7ff62ef65de433579a1267838ba","repoDigests":["localhost/my-image@sha256:19149a1be4b3e2a0e0e66d4b2d37a820595e9a4d05303d9697f2ddaf33fd5f05"],"repoTags":["localhost/my-image:functional-928436"],"size":"1468600"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":["registry.k8s.io/kube-proxy
@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa","registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"85933465"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":["registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036","registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"63026504"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-928436"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busyb
ox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ed94c4a79c994391dc3291b6496253a4ad6b661e86fa6ca4ec131f998ee7143c","repoDigests":["docker.io/library/91174d58c4659fcb90c9c6189756bd9df40b1b460af86b2acc05c455b
b339a0c-tmp@sha256:662d227cef38ccebde280a0b2ed060b846dcba7b109dd2b36541fab8dc4f0821"],"repoTags":[],"size":"1466018"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-928436 image ls --format json --alsologtostderr:
I0531 18:40:36.894750   32903 out.go:291] Setting OutFile to fd 1 ...
I0531 18:40:36.894984   32903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:36.894993   32903 out.go:304] Setting ErrFile to fd 2...
I0531 18:40:36.894997   32903 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:36.895142   32903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
I0531 18:40:36.895664   32903 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:36.895749   32903 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:36.896078   32903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:36.896126   32903 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:36.910488   32903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
I0531 18:40:36.911056   32903 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:36.911588   32903 main.go:141] libmachine: Using API Version  1
I0531 18:40:36.911614   32903 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:36.911983   32903 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:36.912201   32903 main.go:141] libmachine: (functional-928436) Calling .GetState
I0531 18:40:36.914075   32903 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:36.914115   32903 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:36.929366   32903 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37115
I0531 18:40:36.929757   32903 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:36.930251   32903 main.go:141] libmachine: Using API Version  1
I0531 18:40:36.930290   32903 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:36.930594   32903 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:36.930810   32903 main.go:141] libmachine: (functional-928436) Calling .DriverName
I0531 18:40:36.931047   32903 ssh_runner.go:195] Run: systemctl --version
I0531 18:40:36.931080   32903 main.go:141] libmachine: (functional-928436) Calling .GetSSHHostname
I0531 18:40:36.933584   32903 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:36.933998   32903 main.go:141] libmachine: (functional-928436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:85:f7", ip: ""} in network mk-functional-928436: {Iface:virbr1 ExpiryTime:2024-05-31 19:37:37 +0000 UTC Type:0 Mac:52:54:00:03:85:f7 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-928436 Clientid:01:52:54:00:03:85:f7}
I0531 18:40:36.934025   32903 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined IP address 192.168.39.117 and MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:36.934145   32903 main.go:141] libmachine: (functional-928436) Calling .GetSSHPort
I0531 18:40:36.934334   32903 main.go:141] libmachine: (functional-928436) Calling .GetSSHKeyPath
I0531 18:40:36.934500   32903 main.go:141] libmachine: (functional-928436) Calling .GetSSHUsername
I0531 18:40:36.934635   32903 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/functional-928436/id_rsa Username:docker}
I0531 18:40:37.016599   32903 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 18:40:37.497815   32903 main.go:141] libmachine: Making call to close driver server
I0531 18:40:37.497833   32903 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:37.498139   32903 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:37.498159   32903 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:37.498170   32903 main.go:141] libmachine: Making call to close driver server
I0531 18:40:37.498178   32903 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:37.498229   32903 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
I0531 18:40:37.498383   32903 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:37.498459   32903 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:37.498474   32903 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.65s)
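
The JSON listing above is what "image ls --format json" returns; as the stderr shows, the command SSHes into the node and runs crictl under the hood. A minimal manual reproduction, assuming the functional-928436 profile is still running, would be:

	# same command the test runs
	out/minikube-linux-amd64 -p functional-928436 image ls --format json --alsologtostderr
	# or query CRI-O directly inside the VM, which is what the command shells out to
	out/minikube-linux-amd64 -p functional-928436 ssh "sudo crictl images --output json"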

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-928436 image ls --format yaml --alsologtostderr:
- id: 5dda9874b9b0ea5789aa3d9b35ff7e46625a42f6f7af913da833465c44701fb2
repoDigests:
- localhost/minikube-local-cache-test@sha256:e14228c57f1900e680dd68b904623cc6247047e9fdf82d8e6a6e7a77c06678ec
repoTags:
- localhost/minikube-local-cache-test:functional-928436
size: "3330"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:2eec8116ed9b8f46b6a90a46434711354d2222575ab50a4aca42bb6ab19989fa
- registry.k8s.io/kube-proxy@sha256:a1754e5a33878878e78dd0141167e7c529d91eb9b36ffbbf91a6052257b3179c
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "85933465"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 4f67c83422ec747235357c04556616234e66fc3fa39cb4f40b2d4441ddd8f100
repoDigests:
- docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
- docker.io/library/nginx@sha256:1445eb9c6dc5e9619346c836ef6fbd6a95092e4663f27dcfce116f051cdbd232
repoTags:
- docker.io/library/nginx:latest
size: "191814165"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-928436
size: "34114467"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5
repoDigests:
- docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988
- docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac
repoTags:
- docker.io/kindest/kindnetd:v20240202-8f1494ea
size: "65291810"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0d4a3051234387b78affbcde283dcde5df21e0d6d740c80c363db1cbb973b4ea
- registry.k8s.io/kube-apiserver@sha256:a9cf4f4eb92ef02b0a8ba4148f50b4a1b2bd3e9b28a8f9913ea8c3bcc08e610c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117601759"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:0c34190fbf807746f6584104811ed5cda72fb30ce30a036c132dea692d55ec52
- registry.k8s.io/kube-controller-manager@sha256:110a010162e119e768e13bb104c0883fb4aceb894659787744abf115fcc56027
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "112170310"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:74d02f6debc5ff3d3bc03f96ae029fb9c72ec1ea94c14e2cdf279939d8e0e036
- registry.k8s.io/kube-scheduler@sha256:8ebcbcb8ecc9fc76029ac1dc12f3f15e33e6d26f018d49d5db4437f3d4b34973
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "63026504"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-928436 image ls --format yaml --alsologtostderr:
I0531 18:40:30.834858   32754 out.go:291] Setting OutFile to fd 1 ...
I0531 18:40:30.835103   32754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:30.835112   32754 out.go:304] Setting ErrFile to fd 2...
I0531 18:40:30.835117   32754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0531 18:40:30.835283   32754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
I0531 18:40:30.835777   32754 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:30.835863   32754 config.go:182] Loaded profile config "functional-928436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
I0531 18:40:30.836194   32754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:30.836247   32754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:30.850994   32754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46067
I0531 18:40:30.851413   32754 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:30.851903   32754 main.go:141] libmachine: Using API Version  1
I0531 18:40:30.851922   32754 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:30.852263   32754 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:30.852448   32754 main.go:141] libmachine: (functional-928436) Calling .GetState
I0531 18:40:30.854068   32754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0531 18:40:30.854104   32754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0531 18:40:30.868732   32754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42215
I0531 18:40:30.869101   32754 main.go:141] libmachine: () Calling .GetVersion
I0531 18:40:30.869595   32754 main.go:141] libmachine: Using API Version  1
I0531 18:40:30.869636   32754 main.go:141] libmachine: () Calling .SetConfigRaw
I0531 18:40:30.869935   32754 main.go:141] libmachine: () Calling .GetMachineName
I0531 18:40:30.870103   32754 main.go:141] libmachine: (functional-928436) Calling .DriverName
I0531 18:40:30.870287   32754 ssh_runner.go:195] Run: systemctl --version
I0531 18:40:30.870307   32754 main.go:141] libmachine: (functional-928436) Calling .GetSSHHostname
I0531 18:40:30.872831   32754 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:30.873215   32754 main.go:141] libmachine: (functional-928436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:03:85:f7", ip: ""} in network mk-functional-928436: {Iface:virbr1 ExpiryTime:2024-05-31 19:37:37 +0000 UTC Type:0 Mac:52:54:00:03:85:f7 Iaid: IPaddr:192.168.39.117 Prefix:24 Hostname:functional-928436 Clientid:01:52:54:00:03:85:f7}
I0531 18:40:30.873243   32754 main.go:141] libmachine: (functional-928436) DBG | domain functional-928436 has defined IP address 192.168.39.117 and MAC address 52:54:00:03:85:f7 in network mk-functional-928436
I0531 18:40:30.873375   32754 main.go:141] libmachine: (functional-928436) Calling .GetSSHPort
I0531 18:40:30.873508   32754 main.go:141] libmachine: (functional-928436) Calling .GetSSHKeyPath
I0531 18:40:30.873674   32754 main.go:141] libmachine: (functional-928436) Calling .GetSSHUsername
I0531 18:40:30.873829   32754 sshutil.go:53] new ssh client: &{IP:192.168.39.117 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/functional-928436/id_rsa Username:docker}
I0531 18:40:30.951217   32754 ssh_runner.go:195] Run: sudo crictl images --output json
I0531 18:40:30.993470   32754 main.go:141] libmachine: Making call to close driver server
I0531 18:40:30.993486   32754 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:30.993742   32754 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:30.993759   32754 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:30.993767   32754 main.go:141] libmachine: Making call to close driver server
I0531 18:40:30.993766   32754 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
I0531 18:40:30.993777   32754 main.go:141] libmachine: (functional-928436) Calling .Close
I0531 18:40:30.994061   32754 main.go:141] libmachine: Successfully made call to close driver server
I0531 18:40:30.994075   32754 main.go:141] libmachine: Making call to close connection to plugin binary
I0531 18:40:30.994097   32754 main.go:141] libmachine: (functional-928436) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.996025405s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-928436
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.02s)
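
The Setup step only seeds a local test tag: it pulls a public image and re-tags it with the profile name so the later image commands have something to operate on. The equivalent manual steps, assuming Docker is available on the host, are:

	docker pull gcr.io/google-containers/addon-resizer:1.8.8
	docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-928436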

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.08s)
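
All three UpdateContextCmd variants invoke the same command; update-context rewrites the kubeconfig entry for the profile so kubectl points at the current VM address. A quick manual check, assuming kubectl is on the PATH (the kubectl call is an illustration, not part of the test), would be:

	out/minikube-linux-amd64 -p functional-928436 update-context --alsologtostderr -v=2
	kubectl config get-contexts functional-928436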

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr: (3.905812157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.10s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "339.081173ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "85.640238ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "290.993837ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "44.497504ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (22.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdany-port2351596585/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1717180802370909761" to /tmp/TestFunctionalparallelMountCmdany-port2351596585/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1717180802370909761" to /tmp/TestFunctionalparallelMountCmdany-port2351596585/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1717180802370909761" to /tmp/TestFunctionalparallelMountCmdany-port2351596585/001/test-1717180802370909761
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.32077ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 31 18:40 created-by-test
-rw-r--r-- 1 docker docker 24 May 31 18:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 31 18:40 test-1717180802370909761
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh cat /mount-9p/test-1717180802370909761
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-928436 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f13e0ab7-726d-46e7-b03c-caa1375dce60] Pending
helpers_test.go:344: "busybox-mount" [f13e0ab7-726d-46e7-b03c-caa1375dce60] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f13e0ab7-726d-46e7-b03c-caa1375dce60] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f13e0ab7-726d-46e7-b03c-caa1375dce60] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 20.005247265s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-928436 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdany-port2351596585/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (22.62s)
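
The any-port test exercises the full 9p mount flow: start the mount helper, confirm the mount from inside the guest, exercise it from a pod, then unmount. A condensed manual sketch of the same flow, using a hypothetical host directory /tmp/mnt-demo in place of the test's temp directory:

	# terminal 1: keep the mount helper running
	out/minikube-linux-amd64 mount -p functional-928436 /tmp/mnt-demo:/mount-9p --alsologtostderr -v=1
	# terminal 2: verify and inspect the mount inside the VM
	out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-928436 ssh -- ls -la /mount-9p
	# clean up
	out/minikube-linux-amd64 -p functional-928436 ssh "sudo umount -f /mount-9p"

The initial non-zero findmnt exit in the log is simply the check running before the mount helper has finished mounting; the test retries and the second attempt succeeds.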

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr: (2.566047073s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.248296096s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-928436
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image load --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr: (8.206763404s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.71s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service list -o json
functional_test.go:1490: Took "335.991552ms" to run "out/minikube-linux-amd64 -p functional-928436 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.117:30428
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.117:30428
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
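
The ServiceCmd checks above all resolve the hello-node NodePort endpoint (as HTTPS, as IP only via --format, and as a plain URL). To fetch the URL and hit the endpoint by hand, assuming the hello-node service still exists (the curl call is only an illustrative check, not part of the test):

	URL=$(out/minikube-linux-amd64 -p functional-928436 service hello-node --url)
	curl -s "$URL"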

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image save gcr.io/google-containers/addon-resizer:functional-928436 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image save gcr.io/google-containers/addon-resizer:functional-928436 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.310344661s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image rm gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.035836573s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-928436
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 image save --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-928436 image save --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr: (5.080809185s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-928436
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (5.12s)
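
Taken together, the save/rm/load/save-daemon steps form a full image round trip: out to a tarball, back into the cluster runtime, and finally back into the host Docker daemon. A condensed sketch using the same commands, with ./addon-resizer-save.tar standing in for the workspace path used by the test:

	out/minikube-linux-amd64 -p functional-928436 image save gcr.io/google-containers/addon-resizer:functional-928436 ./addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-928436 image rm gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
	out/minikube-linux-amd64 -p functional-928436 image load ./addon-resizer-save.tar --alsologtostderr
	out/minikube-linux-amd64 -p functional-928436 image save --daemon gcr.io/google-containers/addon-resizer:functional-928436 --alsologtostderr
	docker image inspect gcr.io/google-containers/addon-resizer:functional-928436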

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdspecific-port935496381/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.565441ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdspecific-port935496381/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "sudo umount -f /mount-9p": exit status 1 (257.542565ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-928436 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdspecific-port935496381/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T" /mount1: exit status 1 (270.849948ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-928436 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-928436 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-928436 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3205544238/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.46s)
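
VerifyCleanup starts three mount helpers (/mount1, /mount2, /mount3) from one host directory and then kills them all at once. The single cleanup command the test uses, which tears down every mount helper for the profile, is:

	out/minikube-linux-amd64 mount -p functional-928436 --kill=true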

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-928436
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-928436
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-928436
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (206.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-080809 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-080809 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m25.990954367s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (206.66s)
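
The HA cluster is created with the --ha flag, which provisions multiple control-plane nodes within one profile. A reproduction of the setup plus the follow-up health check, using the same arguments as the test:

	out/minikube-linux-amd64 start -p ha-080809 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
	out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr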

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-080809 -- rollout status deployment/busybox: (4.400873946s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-qtbxj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-wqwrw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-qtbxj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-wqwrw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-qtbxj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-wqwrw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.51s)
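
DeployApp rolls out a busybox deployment and then verifies in-cluster DNS from each replica. The per-pod DNS check can be repeated by hand; the pod name below is the one reported above and will differ on another run:

	out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- nslookup kubernetes.default.svc.cluster.local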

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-dwnsz -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-qtbxj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-qtbxj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-wqwrw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-080809 -- exec busybox-fc5497c4f-wqwrw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (47.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-080809 -v=7 --alsologtostderr
E0531 18:44:58.917093   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:58.922881   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:58.933183   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:58.953446   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:58.993754   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:59.074098   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:59.234454   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:44:59.554858   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:45:00.195759   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:45:01.476360   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:45:04.037518   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
E0531 18:45:09.157728   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-080809 -v=7 --alsologtostderr: (46.953536137s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (47.75s)
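
Worker nodes are attached to the running HA cluster with node add. The cert_rotation errors interleaved above reference the functional-928436 profile from the earlier tests rather than this cluster, and the step still passes. The commands used are:

	out/minikube-linux-amd64 node add -p ha-080809 -v=7 --alsologtostderr
	out/minikube-linux-amd64 -p ha-080809 status -v=7 --alsologtostderr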

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-080809 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp testdata/cp-test.txt ha-080809:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809:/home/docker/cp-test.txt ha-080809-m02:/home/docker/cp-test_ha-080809_ha-080809-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test_ha-080809_ha-080809-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809:/home/docker/cp-test.txt ha-080809-m03:/home/docker/cp-test_ha-080809_ha-080809-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test_ha-080809_ha-080809-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809:/home/docker/cp-test.txt ha-080809-m04:/home/docker/cp-test_ha-080809_ha-080809-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test_ha-080809_ha-080809-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp testdata/cp-test.txt ha-080809-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m02:/home/docker/cp-test.txt ha-080809:/home/docker/cp-test_ha-080809-m02_ha-080809.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test_ha-080809-m02_ha-080809.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m02:/home/docker/cp-test.txt ha-080809-m03:/home/docker/cp-test_ha-080809-m02_ha-080809-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test_ha-080809-m02_ha-080809-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m02:/home/docker/cp-test.txt ha-080809-m04:/home/docker/cp-test_ha-080809-m02_ha-080809-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test_ha-080809-m02_ha-080809-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp testdata/cp-test.txt ha-080809-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt ha-080809:/home/docker/cp-test_ha-080809-m03_ha-080809.txt
E0531 18:45:19.398735   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test_ha-080809-m03_ha-080809.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt ha-080809-m02:/home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test_ha-080809-m03_ha-080809-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m03:/home/docker/cp-test.txt ha-080809-m04:/home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test_ha-080809-m03_ha-080809-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp testdata/cp-test.txt ha-080809-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3909848547/001/cp-test_ha-080809-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt ha-080809:/home/docker/cp-test_ha-080809-m04_ha-080809.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809 "sudo cat /home/docker/cp-test_ha-080809-m04_ha-080809.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt ha-080809-m02:/home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test_ha-080809-m04_ha-080809-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 cp ha-080809-m04:/home/docker/cp-test.txt ha-080809-m03:/home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-080809 ssh -n ha-080809-m03 "sudo cat /home/docker/cp-test_ha-080809-m04_ha-080809-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.41s)
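
Note: the CopyFile steps above exercise node-qualified paths with minikube cp and then verify each copy over minikube ssh -n <node>. A minimal sketch of the same pattern run by hand against the ha-080809 profile from this log (assumes the cluster is still up and that a minikube binary on PATH stands in for out/minikube-linux-amd64):

  $ minikube -p ha-080809 cp testdata/cp-test.txt ha-080809-m02:/home/docker/cp-test.txt
  $ minikube -p ha-080809 ssh -n ha-080809-m02 "sudo cat /home/docker/cp-test.txt"
  $ minikube -p ha-080809 cp ha-080809-m02:/home/docker/cp-test.txt ha-080809-m03:/home/docker/cp-test_ha-080809-m02_ha-080809-m03.txt

Every cp in the test is immediately followed by an ssh/cat of the destination, so a silent truncation or a copy landing on the wrong node fails the test right away.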

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.477513868s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
x
+
TestJSONOutput/start/Command (53.13s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-749909 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-749909 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.132159678s)
--- PASS: TestJSONOutput/start/Command (53.13s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-749909 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-749909 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (9.62s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-749909 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-749909 --output=json --user=testUser: (9.616450799s)
--- PASS: TestJSONOutput/stop/Command (9.62s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-018631 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-018631 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (59.129772ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9ceb2fb4-f5eb-42bb-afd6-4f16774b677b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-018631] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"67729a9a-3bc4-4eb6-9947-b61acf3283be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18995"}}
	{"specversion":"1.0","id":"788d55ea-3bd0-4b90-80d4-be6f92ce945a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d3e04fbc-3d10-4cff-8b9f-a4686b8d9c7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig"}}
	{"specversion":"1.0","id":"4f326385-a9db-418c-ad06-5d0dc1c4b058","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube"}}
	{"specversion":"1.0","id":"342bdda5-1c10-4b3b-930e-ae6fc3792994","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f216f52a-741e-4c07-9e89-e36f35f4130a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d9e989f-69f7-4ccc-a276-157c5138d3b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-018631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-018631
--- PASS: TestErrorJSONOutput (0.18s)
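
Note: with --output=json every line minikube prints is a self-contained CloudEvents-style JSON object (specversion, id, source, type, data), as the stdout above shows, and the failure surfaces as a type io.k8s.sigs.minikube.error event carrying exitcode 56 and name DRV_UNSUPPORTED_OS. A small sketch for filtering just the error events with jq (jq is not part of the test suite, purely illustrative):

  $ minikube start -p json-output-error-018631 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name): \(.message) (exit \(.exitcode))"'

For the run above this would print the single DRV_UNSUPPORTED_OS line.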

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (81.61s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-707434 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-707434 --driver=kvm2  --container-runtime=crio: (39.416856474s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-709950 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-709950 --driver=kvm2  --container-runtime=crio: (39.628965897s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-707434
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-709950
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-709950" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-709950
helpers_test.go:175: Cleaning up "first-707434" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-707434
--- PASS: TestMinikubeProfile (81.61s)
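
Note: TestMinikubeProfile covers the multi-profile workflow: two independent clusters are started under different profile names, 'minikube profile <name>' switches the active profile, and 'minikube profile list -ojson' confirms which one is active. A hand-run sketch of the same flow, using the profile names from this run (assumes the kvm2 driver and crio runtime as above):

  $ minikube start -p first-707434 --driver=kvm2 --container-runtime=crio
  $ minikube start -p second-709950 --driver=kvm2 --container-runtime=crio
  $ minikube profile first-707434        # make first-707434 the active profile
  $ minikube profile list -ojson
  $ minikube delete -p second-709950 && minikube delete -p first-707434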

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-059094 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-059094 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.918930779s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.92s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-059094 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-059094 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.36s)
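
Note: the two tests above start a VM with a 9p host mount (--mount plus --mount-gid/--mount-msize/--mount-port/--mount-uid, and --no-kubernetes so only the VM comes up) and then verify the mount from inside the guest. Reproduced by hand with the same flags (sketch; the msize and port values are simply the ones this run used):

  $ minikube start -p mount-start-1-059094 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 \
      --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
  $ minikube -p mount-start-1-059094 ssh -- ls /minikube-host
  $ minikube -p mount-start-1-059094 ssh -- mount | grep 9p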

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (27.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-073809 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-073809 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.750605746s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.75s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.35s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-059094 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-073809
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-073809: (1.263042501s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (23.34s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-073809
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-073809: (22.338055349s)
--- PASS: TestMountStart/serial/RestartStopped (23.34s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-073809 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (101.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-988083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0531 19:14:58.916439   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-988083 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m41.486674623s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (101.88s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-988083 -- rollout status deployment/busybox: (3.95088156s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-zw69g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-zw69g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-zw69g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.33s)
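
Note: DeployApp2Nodes schedules a two-replica busybox deployment across the two nodes and then resolves kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local from each pod, so it checks both scheduling and cluster DNS from every node. The same check by hand, driving kubectl through the minikube wrapper as the test does (the manifest is the repository's test fixture; pod names will differ between runs):

  $ minikube kubectl -p multinode-988083 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
  $ minikube kubectl -p multinode-988083 -- rollout status deployment/busybox
  $ minikube kubectl -p multinode-988083 -- get pods -o jsonpath='{.items[*].metadata.name}'
  $ minikube kubectl -p multinode-988083 -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local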

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-zw69g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-988083 -- exec busybox-fc5497c4f-zw69g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.80s)
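
Note: the pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) extracts the resolved address of host.minikube.internal from busybox's nslookup output, 192.168.39.1 in this run, and the follow-up ping -c 1 confirms each pod can reach the host from inside the cluster network. The same probe by hand (pod name taken from this run; any pod in the busybox deployment works):

  $ minikube kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  $ minikube kubectl -p multinode-988083 -- exec busybox-fc5497c4f-t6fcj -- sh -c "ping -c 1 192.168.39.1"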

                                                
                                    
x
+
TestMultiNode/serial/AddNode (38.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-988083 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-988083 -v 3 --alsologtostderr: (37.716600673s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (38.26s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-988083 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (6.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp testdata/cp-test.txt multinode-988083:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083:/home/docker/cp-test.txt multinode-988083-m02:/home/docker/cp-test_multinode-988083_multinode-988083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test_multinode-988083_multinode-988083-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083:/home/docker/cp-test.txt multinode-988083-m03:/home/docker/cp-test_multinode-988083_multinode-988083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test_multinode-988083_multinode-988083-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp testdata/cp-test.txt multinode-988083-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt multinode-988083:/home/docker/cp-test_multinode-988083-m02_multinode-988083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test_multinode-988083-m02_multinode-988083.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m02:/home/docker/cp-test.txt multinode-988083-m03:/home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test_multinode-988083-m02_multinode-988083-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp testdata/cp-test.txt multinode-988083-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1280635764/001/cp-test_multinode-988083-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt multinode-988083:/home/docker/cp-test_multinode-988083-m03_multinode-988083.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083 "sudo cat /home/docker/cp-test_multinode-988083-m03_multinode-988083.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 cp multinode-988083-m03:/home/docker/cp-test.txt multinode-988083-m02:/home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 ssh -n multinode-988083-m02 "sudo cat /home/docker/cp-test_multinode-988083-m03_multinode-988083-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.96s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-988083 node stop m03: (1.481830692s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-988083 status: exit status 7 (405.937989ms)

                                                
                                                
-- stdout --
	multinode-988083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-988083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-988083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr: exit status 7 (406.630304ms)

                                                
                                                
-- stdout --
	multinode-988083
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-988083-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-988083-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0531 19:17:20.811787   50297 out.go:291] Setting OutFile to fd 1 ...
	I0531 19:17:20.811911   50297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:17:20.811921   50297 out.go:304] Setting ErrFile to fd 2...
	I0531 19:17:20.811925   50297 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0531 19:17:20.812119   50297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18995-11107/.minikube/bin
	I0531 19:17:20.812311   50297 out.go:298] Setting JSON to false
	I0531 19:17:20.812334   50297 mustload.go:65] Loading cluster: multinode-988083
	I0531 19:17:20.812431   50297 notify.go:220] Checking for updates...
	I0531 19:17:20.812739   50297 config.go:182] Loaded profile config "multinode-988083": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.1
	I0531 19:17:20.812756   50297 status.go:255] checking status of multinode-988083 ...
	I0531 19:17:20.813183   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:20.813255   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:20.831738   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35221
	I0531 19:17:20.832105   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:20.832632   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:20.832658   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:20.832944   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:20.833170   50297 main.go:141] libmachine: (multinode-988083) Calling .GetState
	I0531 19:17:20.834673   50297 status.go:330] multinode-988083 host status = "Running" (err=<nil>)
	I0531 19:17:20.834687   50297 host.go:66] Checking if "multinode-988083" exists ...
	I0531 19:17:20.835022   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:20.835064   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:20.849272   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40479
	I0531 19:17:20.849590   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:20.849974   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:20.849991   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:20.850269   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:20.850443   50297 main.go:141] libmachine: (multinode-988083) Calling .GetIP
	I0531 19:17:20.852903   50297 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:17:20.853300   50297 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:17:20.853340   50297 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:17:20.853432   50297 host.go:66] Checking if "multinode-988083" exists ...
	I0531 19:17:20.853790   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:20.853833   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:20.867635   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39119
	I0531 19:17:20.867969   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:20.868360   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:20.868378   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:20.868656   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:20.868818   50297 main.go:141] libmachine: (multinode-988083) Calling .DriverName
	I0531 19:17:20.869015   50297 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:17:20.869033   50297 main.go:141] libmachine: (multinode-988083) Calling .GetSSHHostname
	I0531 19:17:20.871338   50297 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:17:20.871726   50297 main.go:141] libmachine: (multinode-988083) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:aa:8a:02", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:14:59 +0000 UTC Type:0 Mac:52:54:00:aa:8a:02 Iaid: IPaddr:192.168.39.78 Prefix:24 Hostname:multinode-988083 Clientid:01:52:54:00:aa:8a:02}
	I0531 19:17:20.871749   50297 main.go:141] libmachine: (multinode-988083) DBG | domain multinode-988083 has defined IP address 192.168.39.78 and MAC address 52:54:00:aa:8a:02 in network mk-multinode-988083
	I0531 19:17:20.871863   50297 main.go:141] libmachine: (multinode-988083) Calling .GetSSHPort
	I0531 19:17:20.872019   50297 main.go:141] libmachine: (multinode-988083) Calling .GetSSHKeyPath
	I0531 19:17:20.872145   50297 main.go:141] libmachine: (multinode-988083) Calling .GetSSHUsername
	I0531 19:17:20.872284   50297 sshutil.go:53] new ssh client: &{IP:192.168.39.78 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083/id_rsa Username:docker}
	I0531 19:17:20.949483   50297 ssh_runner.go:195] Run: systemctl --version
	I0531 19:17:20.956296   50297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:17:20.973629   50297 kubeconfig.go:125] found "multinode-988083" server: "https://192.168.39.78:8443"
	I0531 19:17:20.973657   50297 api_server.go:166] Checking apiserver status ...
	I0531 19:17:20.973684   50297 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0531 19:17:20.988250   50297 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1131/cgroup
	W0531 19:17:20.997757   50297 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1131/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0531 19:17:20.997801   50297 ssh_runner.go:195] Run: ls
	I0531 19:17:21.002274   50297 api_server.go:253] Checking apiserver healthz at https://192.168.39.78:8443/healthz ...
	I0531 19:17:21.006416   50297 api_server.go:279] https://192.168.39.78:8443/healthz returned 200:
	ok
	I0531 19:17:21.006433   50297 status.go:422] multinode-988083 apiserver status = Running (err=<nil>)
	I0531 19:17:21.006441   50297 status.go:257] multinode-988083 status: &{Name:multinode-988083 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:17:21.006456   50297 status.go:255] checking status of multinode-988083-m02 ...
	I0531 19:17:21.006739   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:21.006768   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:21.021305   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46761
	I0531 19:17:21.021646   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:21.022118   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:21.022135   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:21.022465   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:21.022642   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetState
	I0531 19:17:21.024035   50297 status.go:330] multinode-988083-m02 host status = "Running" (err=<nil>)
	I0531 19:17:21.024050   50297 host.go:66] Checking if "multinode-988083-m02" exists ...
	I0531 19:17:21.024346   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:21.024376   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:21.038481   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I0531 19:17:21.038840   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:21.039264   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:21.039287   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:21.039557   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:21.039727   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetIP
	I0531 19:17:21.042239   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | domain multinode-988083-m02 has defined MAC address 52:54:00:5b:fc:e5 in network mk-multinode-988083
	I0531 19:17:21.042659   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:e5", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:16:00 +0000 UTC Type:0 Mac:52:54:00:5b:fc:e5 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-988083-m02 Clientid:01:52:54:00:5b:fc:e5}
	I0531 19:17:21.042683   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | domain multinode-988083-m02 has defined IP address 192.168.39.176 and MAC address 52:54:00:5b:fc:e5 in network mk-multinode-988083
	I0531 19:17:21.042818   50297 host.go:66] Checking if "multinode-988083-m02" exists ...
	I0531 19:17:21.043091   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:21.043122   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:21.056814   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0531 19:17:21.057189   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:21.057611   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:21.057635   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:21.057898   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:21.058088   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .DriverName
	I0531 19:17:21.058248   50297 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0531 19:17:21.058270   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetSSHHostname
	I0531 19:17:21.060801   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | domain multinode-988083-m02 has defined MAC address 52:54:00:5b:fc:e5 in network mk-multinode-988083
	I0531 19:17:21.061148   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5b:fc:e5", ip: ""} in network mk-multinode-988083: {Iface:virbr1 ExpiryTime:2024-05-31 20:16:00 +0000 UTC Type:0 Mac:52:54:00:5b:fc:e5 Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:multinode-988083-m02 Clientid:01:52:54:00:5b:fc:e5}
	I0531 19:17:21.061172   50297 main.go:141] libmachine: (multinode-988083-m02) DBG | domain multinode-988083-m02 has defined IP address 192.168.39.176 and MAC address 52:54:00:5b:fc:e5 in network mk-multinode-988083
	I0531 19:17:21.061323   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetSSHPort
	I0531 19:17:21.061485   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetSSHKeyPath
	I0531 19:17:21.061632   50297 main.go:141] libmachine: (multinode-988083-m02) Calling .GetSSHUsername
	I0531 19:17:21.061761   50297 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18995-11107/.minikube/machines/multinode-988083-m02/id_rsa Username:docker}
	I0531 19:17:21.144481   50297 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0531 19:17:21.159383   50297 status.go:257] multinode-988083-m02 status: &{Name:multinode-988083-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0531 19:17:21.159414   50297 status.go:255] checking status of multinode-988083-m03 ...
	I0531 19:17:21.159776   50297 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0531 19:17:21.159817   50297 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0531 19:17:21.175130   50297 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42915
	I0531 19:17:21.175608   50297 main.go:141] libmachine: () Calling .GetVersion
	I0531 19:17:21.176089   50297 main.go:141] libmachine: Using API Version  1
	I0531 19:17:21.176110   50297 main.go:141] libmachine: () Calling .SetConfigRaw
	I0531 19:17:21.176406   50297 main.go:141] libmachine: () Calling .GetMachineName
	I0531 19:17:21.176616   50297 main.go:141] libmachine: (multinode-988083-m03) Calling .GetState
	I0531 19:17:21.178299   50297 status.go:330] multinode-988083-m03 host status = "Stopped" (err=<nil>)
	I0531 19:17:21.178312   50297 status.go:343] host is not running, skipping remaining checks
	I0531 19:17:21.178318   50297 status.go:257] multinode-988083-m03 status: &{Name:multinode-988083-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
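
Note: in this run, once 'node stop m03' has taken effect the status command itself returns exit status 7 while still printing per-node state on stdout, which is why the test records the invocation as a Non-zero exit and then inspects the output. A small sketch of using that behaviour in a script (the exit-code meaning here is inferred from this run, not from minikube documentation):

  $ minikube -p multinode-988083 node stop m03
  $ minikube -p multinode-988083 status
  $ echo $?        # 7 in this run with m03 stopped; 0 when every node is Running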

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (28.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-988083 node start m03 -v=7 --alsologtostderr: (28.03959967s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (28.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-988083 node delete m03: (1.656183072s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (188.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-988083 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-988083 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m8.048643677s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-988083 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (188.56s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (43.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-988083
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-988083-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-988083-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (59.0288ms)

                                                
                                                
-- stdout --
	* [multinode-988083-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-988083-m02' is duplicated with machine name 'multinode-988083-m02' in profile 'multinode-988083'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-988083-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-988083-m03 --driver=kvm2  --container-runtime=crio: (42.072863807s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-988083
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-988083: exit status 80 (200.99462ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-988083 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-988083-m03 already exists in multinode-988083-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-988083-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.08s)

                                                
                                    
x
+
TestScheduledStopUnix (115.09s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-629563 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-629563 --memory=2048 --driver=kvm2  --container-runtime=crio: (43.577141454s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-629563 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-629563 -n scheduled-stop-629563
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-629563 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-629563 --cancel-scheduled
E0531 19:34:41.966991   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-629563 -n scheduled-stop-629563
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-629563
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-629563 --schedule 15s
E0531 19:34:58.916617   18372 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18995-11107/.minikube/profiles/functional-928436/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-629563
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-629563: exit status 7 (64.019893ms)

                                                
                                                
-- stdout --
	scheduled-stop-629563
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-629563 -n scheduled-stop-629563
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-629563 -n scheduled-stop-629563: exit status 7 (64.135491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-629563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-629563
--- PASS: TestScheduledStopUnix (115.09s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (204.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4004371612 start -p running-upgrade-306027 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4004371612 start -p running-upgrade-306027 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m40.502185011s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-306027 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-306027 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m40.046809371s)
helpers_test.go:175: Cleaning up "running-upgrade-306027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-306027
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-306027: (1.230884494s)
--- PASS: TestRunningBinaryUpgrade (204.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (174.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3442280579 start -p stopped-upgrade-351858 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3442280579 start -p stopped-upgrade-351858 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.413871925s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3442280579 -p stopped-upgrade-351858 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3442280579 -p stopped-upgrade-351858 stop: (2.131705832s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-351858 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-351858 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.355696357s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (174.90s)

                                                
                                    
x
+
TestPause/serial/Start (98.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-402559 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-402559 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m38.286937045s)
--- PASS: TestPause/serial/Start (98.29s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (33.51s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-402559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-402559 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (33.484745934s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.51s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-351858
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (59.349583ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-448728] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18995
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18995-11107/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18995-11107/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (45.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-448728 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-448728 --driver=kvm2  --container-runtime=crio: (45.082257128s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-448728 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (45.32s)

                                                
                                    
x
+
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-402559 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-402559 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-402559 --output=json --layout=cluster: exit status 2 (234.937687ms)

                                                
                                                
-- stdout --
	{"Name":"pause-402559","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-402559","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)
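
Note: the `--output=json --layout=cluster` document above is nested (cluster, then nodes, then per-node components), with status code 418 used for "Paused" and 405 for "Stopped". A minimal decoding sketch in Go, declaring only the fields visible in the output above (the full schema likely has more):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Partial view of the cluster-layout status document shown above;
	// fields not inspected here are omitted.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			Components map[string]struct {
				StatusCode int    `json:"StatusCode"`
				StatusName string `json:"StatusName"`
			} `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		raw := `{"Name":"pause-402559","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-402559","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Println(st.Name, st.StatusName) // pause-402559 Paused
		for name, c := range st.Nodes[0].Components {
			fmt.Printf("%s: %s (%d)\n", name, c.StatusName, c.StatusCode)
		}
	}

The command itself exits non-zero (status 2) even though the test passes; the test evidently inspects the JSON body rather than requiring a zero exit from a paused cluster.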

                                                
                                    
x
+
TestPause/serial/Unpause (0.65s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-402559 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.8s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-402559 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (0.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-402559 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.95s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.38s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (43.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --driver=kvm2  --container-runtime=crio: (42.258364174s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-448728 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-448728 status -o json: exit status 2 (242.281502ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-448728","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-448728
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-448728: (1.038612181s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.54s)
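
Note: the per-profile status JSON above is flat (Name/Host/Kubelet/APIServer/Kubeconfig/Worker), matching the Status struct dumps earlier in the log. A small sketch, using the exact JSON shown above, that decodes it and checks the combination this test is after, a running host with Kubernetes stopped:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Fields as they appear in the `status -o json` output above.
	type profileStatus struct {
		Name       string `json:"Name"`
		Host       string `json:"Host"`
		Kubelet    string `json:"Kubelet"`
		APIServer  string `json:"APIServer"`
		Kubeconfig string `json:"Kubeconfig"`
		Worker     bool   `json:"Worker"`
	}

	func main() {
		raw := `{"Name":"NoKubernetes-448728","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st profileStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// With --no-kubernetes the VM should be up while the kubelet stays down.
		fmt.Println(st.Host == "Running" && st.Kubelet == "Stopped") // true
	}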

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-448728 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.518654374s)
--- PASS: TestNoKubernetes/serial/Start (46.52s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-448728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-448728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.371964ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-448728
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-448728: (1.32115417s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (62.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-448728 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-448728 --driver=kvm2  --container-runtime=crio: (1m2.392511116s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (62.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-448728 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-448728 "sudo systemctl is-active --quiet service kubelet": exit status 1 (202.445656ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    

Test skip (32/210)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    